Using multiple devices #755
Comments
It does not mean computation is performed on multiple devices. IMHO, the most important thing about this is that memory (buffers) allocated in an OpenCL context can be used on all devices within the same context. The Boost.Compute context class is just a wrapper for an OpenCL context. I encourage you to read the OpenCL documentation and check the exact relationship between a context and a device in OpenCL.
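A minimal sketch of that setup, assuming at least one OpenCL platform with one or more devices: a single Boost.Compute context spanning every device on the default platform, one command_queue per device, and one buffer shared through the context. Allocation is shared, but work is still enqueued per device.

```cpp
// Minimal sketch: one context spanning all devices of one platform,
// one command_queue per device, one buffer shared through the context.
#include <iostream>
#include <vector>
#include <boost/compute/core.hpp>

namespace compute = boost::compute;

int main()
{
    // A single cl_context may only contain devices from one platform.
    std::vector<compute::device> devices =
        compute::system::default_device().platform().devices();

    // One context shared by every device; buffers created in it are
    // visible to all of them.
    compute::context ctx(devices);

    // Work is still submitted per device, through that device's queue.
    std::vector<compute::command_queue> queues;
    for(const compute::device &d : devices){
        queues.push_back(compute::command_queue(ctx, d));
    }

    // A single buffer allocated in the shared context.
    compute::buffer buf(ctx, 1024 * sizeof(float));

    std::vector<float> host(1024, 1.0f);
    // The first device writes the data ...
    queues.front().enqueue_write_buffer(buf, 0, buf.size(), host.data());
    queues.front().finish();
    // ... and the last device can read the same buffer, but nothing
    // is distributed automatically.
    queues.back().enqueue_read_buffer(buf, 0, buf.size(), host.data());
    queues.back().finish();

    std::cout << "devices in context: " << devices.size() << std::endl;
}
```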
The distinction is clear to me; I just wasn't sure what was implemented here. I simply wanted to confirm whether or not the computations are distributed across devices automatically. I am interested in using multiple devices to scale computations even further than a single GPU allows.
I was working on distributed algorithms in #644, but after August 2016 I haven't had enough free time to refactor and finish it.
@jszuppe do you have a roadmap or anything delineating what else is required to finish it? I am quite busy as well, but perhaps I can lend a hand when I have some availability. IMHO this would be a very important addition that many would find useful.
What I need to finish it is a distributed iterator class. The idea was to have an iterator class for multiple buffers that can be combined with the fancy iterators already in Boost.Compute (transform, zip, etc.) to get a fancy iterator that works with multiple buffers. I would have to take a look at the code.
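Until such an iterator exists, splitting work across devices has to be done by hand. A purely hypothetical sketch of that workaround with the current API (this is not the #644 design; the data size and the sqrt transform are only illustrative):

```cpp
// Hypothetical sketch: manually splitting a transform across two devices.
// Not the distributed iterator from #644, just the per-queue workaround
// available today.
#include <vector>
#include <boost/compute/core.hpp>
#include <boost/compute/algorithm/copy.hpp>
#include <boost/compute/algorithm/transform.hpp>
#include <boost/compute/container/vector.hpp>
#include <boost/compute/functional/math.hpp>

namespace compute = boost::compute;

int main()
{
    std::vector<compute::device> devices =
        compute::system::default_device().platform().devices();
    compute::context ctx(devices);

    std::vector<float> host(1 << 20, 2.0f);
    const size_t half = host.size() / 2;

    // One queue and one slice of the data per device (assumes >= 2 devices;
    // with a single device both queues simply target the same GPU).
    compute::command_queue q0(ctx, devices.front());
    compute::command_queue q1(ctx, devices.back());

    compute::vector<float> a(half, ctx), b(host.size() - half, ctx);
    compute::copy(host.begin(), host.begin() + half, a.begin(), q0);
    compute::copy(host.begin() + half, host.end(), b.begin(), q1);

    // Launch the same algorithm once per queue; nothing is distributed
    // automatically, the split is done by hand.
    compute::transform(a.begin(), a.end(), a.begin(), compute::sqrt<float>(), q0);
    compute::transform(b.begin(), b.end(), b.begin(), compute::sqrt<float>(), q1);

    compute::copy(a.begin(), a.end(), host.begin(), q0);
    compute::copy(b.begin(), b.end(), host.begin() + half, q1);
    q0.finish();
    q1.finish();
}
```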
@cdeterman Hi, do you have any idea how to use multiple GPUs?
@lovingxiuxiu I have seen multiple different implementations of specific things in different papers, hence my initial interest here, where many things would be consolidated instead of my implementing them all myself. So I have a rough idea, but I am by no means an expert.
I see there is a previously closed issue regarding multiple devices in a context here. Does that imply that computations are applied across multiple devices, or just that multiple devices are available to be used independently? I can't quite tell from the documentation whether leveraging multiple devices is implemented here.