Direct buffer passing #291

Open · leizaf opened this issue Nov 22, 2024 · 1 comment

leizaf commented Nov 22, 2024

Hey y'all, I just found out about this project a few days ago. Very cool stuff. I'm currently evaluating it for use with my team at Anduril. One of the main blockers, though, is the lack of an easy way to directly pass device buffers to kernels (I'm sure there are some not-so-straightforward ways). There are several reasons to want this:

  • You are using cubecl in conjunction with some other library, tool, language, etc. that produces arrays/tensors that are already on the GPU. You have no choice but to copy the data back to the host and then have the runtime copy it back to the device in order to call a kernel on it.
  • You want to use unified/pinned memory, GPU RDMA, etc. for performance reasons.

I've sketched out an implementation here and I'm looking for feedback before I get too deep. I've basically just added an ArrayArg variant that takes the Resource of the associated ComputeStorage. In the end you'll be able to mix it with normal cubecl arrays and it'll look something like this:

let device_buffer = CudaResource::from_device_pointer(...);

// launch_unchecked skips bounds checks, so it has to be called in an unsafe block.
unsafe {
    some_kernel::launch_unchecked::<...>(
        &client,
        CubeCount::Static(...),
        CubeDim::new(...),
        // Externally allocated device buffer, passed in directly as a raw resource.
        ArrayArg::from_raw_resource(device_buffer),
        // Ordinary runtime-managed buffer, created from a handle as before.
        ArrayArg::from_raw_parts(&normal_handle, ...),
    );
}
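
To make the shape of the change concrete, here is a minimal, self-contained sketch of the two paths an argument could take. Every name in it (Handle, RawResource, device_ptr) is a hypothetical stand-in, not cubecl's actual API:

/// Stand-in for a runtime-managed allocation.
struct Handle(u64);

/// Stand-in for a raw device resource, e.g. a CUdeviceptr plus a size.
struct RawResource {
    device_ptr: u64,
    len_bytes: usize,
}

/// Stand-in for ArrayArg: a buffer is either runtime-managed or caller-supplied.
enum ArrayArg<'a> {
    Managed { handle: &'a Handle, len: usize },
    Raw(RawResource),
}

/// At launch time, both variants resolve to a device pointer to bind.
fn device_ptr(arg: &ArrayArg) -> u64 {
    match arg {
        ArrayArg::Managed { handle, .. } => handle.0,
        ArrayArg::Raw(res) => res.device_ptr,
    }
}

fn main() {
    let managed = Handle(0x1000);
    let external = RawResource { device_ptr: 0x2000, len_bytes: 4096 };
    assert_eq!(device_ptr(&ArrayArg::Managed { handle: &managed, len: 1024 }), 0x1000);
    assert_eq!(device_ptr(&ArrayArg::Raw(external)), 0x2000);
}

The interesting design question is the Raw path's ownership: the runtime never allocated that pointer, so it must not try to free or recycle it.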
@ArthurBrussee (Contributor) commented

I wonder if it would be possible to write helpers for the DLPack format. That seems to be the preferred interchange format across a bunch of frameworks.
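
For reference, here is what the DLPack side of such helpers would consume: a Rust mirror of the C structs from the public dlpack.h header. The struct layout follows dlpack.h; the comments, the example values, and the idea of wrapping the data pointer as a raw resource are assumptions, not existing cubecl API:

// #[repr(C)] mirrors of the DLPack structs from dlpack.h. A helper like the
// one suggested above would accept a DLManagedTensor from another framework
// and wrap its data pointer as a raw device resource.
use std::ffi::c_void;

#[repr(C)]
pub struct DLDevice {
    pub device_type: i32, // e.g. kDLCUDA = 2
    pub device_id: i32,
}

#[repr(C)]
pub struct DLDataType {
    pub code: u8,   // e.g. kDLFloat = 2
    pub bits: u8,   // e.g. 32 for f32
    pub lanes: u16, // 1 for ordinary tensors
}

#[repr(C)]
pub struct DLTensor {
    pub data: *mut c_void, // device pointer into the producer's memory
    pub device: DLDevice,
    pub ndim: i32,
    pub dtype: DLDataType,
    pub shape: *mut i64,
    pub strides: *mut i64, // null means compact row-major
    pub byte_offset: u64,
}

#[repr(C)]
pub struct DLManagedTensor {
    pub dl_tensor: DLTensor,
    pub manager_ctx: *mut c_void,
    pub deleter: Option<unsafe extern "C" fn(*mut DLManagedTensor)>,
}

The deleter is what lets the consumer release the producer's buffer once it is done with it, which is exactly the ownership question a from_raw_resource-style API has to answer anyway.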
