
Using multiple devices #755

Open
cdeterman opened this issue Nov 30, 2017 · 7 comments

Comments

@cdeterman

I see there is a previously closed issue regarding multiple devices in a context here. Does that imply that computations are applied across multiple devices or just that multiple devices are available to be used independently? I can't quite tell from the documentation if leveraging multiple devices is implemented here.

@jszuppe
Contributor

jszuppe commented Nov 30, 2017

It does not mean computation is performed on multiple devices. IMHO, the most important point here is that memory (buffers) allocated in an OpenCL context can be used on all devices within that context. The Boost.Compute context class is just a wrapper around an OpenCL context. I encourage you to read the OpenCL documentation and check the exact relationship between a context and a device in OpenCL.

@cdeterman
Author

The distinction is clear to me, I just wasn't sure what was implemented here. I simply wanted to confirm that computations are not distributed across devices automatically. I am interested in using multiple devices to scale computations even further than a single GPU allows.

@jszuppe
Contributor

jszuppe commented Nov 30, 2017

I was working on distributed algorithms here: #644, but since August 2016 I haven't had enough free time to refactor and finish it.

@cdeterman
Author

@jszuppe do you have a roadmap or anything delineating what else is required to finish it? I am quite busy as well, but perhaps I can lend a hand when I have some availability. IMHO this would be a very important addition that many people would find useful.

@jszuppe
Contributor

jszuppe commented Nov 30, 2017

What I need to finish it is a distributed iterator class. The idea was to have an iterator class over multiple buffers which can be combined with the fancy iterators already in Boost.Compute (transform, zip, etc.) to get fancy iterators that work with multiple buffers. I would have to take a look at the code.

@lovingxiuxiu

@cdeterman Hi, do you have any idea how to use multiple GPUs?

@cdeterman
Author

@lovingxiuxiu I have seen several different implementations of specific things across different papers. Hence my initial interest here, where many of those pieces would be consolidated instead of me implementing them all myself. So I have a rough idea, but I am by no means an expert.
