Add CUDA/HIP implementations of reduction operators #12569
Adding an entirely new type of ompi_op just to cater to the need for a stream is kind of ugly. I understand the desire to make them as flexible as possible, but in the context of MPI we handle a very restricted number of streams, and we expect the MPI_Op to always execute in a single stream.
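For context, the signature change at issue is roughly the following; the typedef names are illustrative and not taken from the PR. The stream argument makes the device-side handler type incompatible with the host-side one, which is what forces a second, parallel kind of op.

```c
#include <cuda_runtime.h>

/* Existing style: a host-side reduction handler (illustrative signature). */
typedef void (host_op_fn_t)(const void *in, void *inout, int count);

/* Stream-aware style: the same reduction, but enqueued on a caller-provided
 * stream.  The extra argument is what splits the op into two kinds. */
typedef void (device_op_fn_t)(const void *in, void *inout, int count,
                              cudaStream_t stream);
```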
Ideally, we will come to a point where the user can provide us with a stream. We would then operate on that stream, so it makes sense to pass a stream into the operator.
Are you suggesting we use a default stream across all of OMPI?
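A minimal sketch of the "pass the stream explicitly" model under discussion; the kernel and function names are illustrative, not taken from the PR. The op itself performs no synchronization and simply enqueues work on whatever stream the caller hands it.

```cuda
#include <cuda_runtime.h>

/* Element-wise sum: the device-side counterpart of MPI_SUM on floats. */
__global__ void sum_float_kernel(const float *in, float *inout, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count) {
        inout[i] += in[i];
    }
}

/* Hypothetical stream-aware operator: launch on the caller's stream and
 * return immediately; ordering is the caller's responsibility. */
void device_op_sum_float(const void *in, void *inout, int count,
                         cudaStream_t stream)
{
    int threads = 256;
    int blocks  = (count + threads - 1) / threads;
    sum_float_kernel<<<blocks, threads, 0, stream>>>(
        (const float *)in, (float *)inout, count);
}
```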
The user might configure some streams in OMPI, but not a stream per invocation of an MPI_Op. A stream per communicator would be a good addition, and we will figure out how to pass it down to operations that do not use communicators (such as the MPI_Op). But adding it as an explicit argument creates two MPI_Op APIs.
I don't have a better idea right now; it is just that this approach requires too much code modification for very little added benefit.
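As a rough sketch of what a stream-per-communicator association could look like from the user's side; the keyval-based attachment of a cudaStream_t is illustrative and not an existing OMPI interface.

```c
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    /* Attach the stream to the communicator once... */
    int keyval;
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, MPI_COMM_NULL_DELETE_FN,
                           &keyval, NULL);
    MPI_Comm_set_attr(MPI_COMM_WORLD, keyval, (void *)stream);

    /* ...so anything that holds the communicator (the collective, and
     * indirectly the MPI_Op it invokes) can recover it later. */
    void *attr = NULL;
    int found = 0;
    MPI_Comm_get_attr(MPI_COMM_WORLD, keyval, &attr, &found);
    cudaStream_t recovered = found ? (cudaStream_t)attr : NULL;
    (void)recovered;

    MPI_Comm_free_keyval(&keyval);
    cudaStreamDestroy(stream);
    MPI_Finalize();
    return 0;
}
```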
An alternative (the only one I can think of) to explicit API pass-through is thread-local variables. That is hidden state, ugly and error-prone.
In fact, we want to have both host-side and device-side incarnations of the ops side by side, because we don't know whether the user will pass us host or device buffers. So even if they had the same signature we would want to store them separately. I'm not sure it would simplify anything in a meaningful way, then.
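To make the hidden-state objection concrete, this is roughly what the thread-local alternative would look like; it is a sketch of the option being rejected, not code from the PR.

```c
#include <cuda_runtime.h>

/* Hidden per-thread state: every caller must remember to set it before
 * invoking an op, and nothing in the op's signature hints at it. */
static _Thread_local cudaStream_t current_op_stream; /* NULL = default stream */

void op_set_stream(cudaStream_t stream)
{
    current_op_stream = stream;
}

/* An op keeping the unchanged signature would silently read the hidden
 * state instead of receiving the stream explicitly. */
void device_op_sum_float(const void *in, void *inout, int count)
{
    cudaStream_t stream = current_op_stream;
    /* ...launch the reduction kernel on 'stream'... */
    (void)in; (void)inout; (void)count; (void)stream;
}
```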
Yeah, I thought about thread-local, but as you said it is error-prone and unsafe. I was more inclined toward a context-level storage solution, such as the communicator or maybe the collective itself, but something higher level than the MPI_Op. The reason is that in the end we will want to be able to orchestrate (and take advantage of) the dependencies between different parts of the same collective, and this is more natural if they share a stream.
The question of how the stream ends up in MPI is an interesting one (and I am favoring communicators as well). Somehow it needs to come from the high level down to the operator, and I still favor the direct way of passing it as an argument.
I just realized that, when adding the opm_stream* members, I should probably bump the version of the struct?
I don't think you need to bump the version of the module/component struct as the other function pointer has the same signature.
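For readers unfamiliar with the convention being referenced: OMPI-style module and component structs typically encode their version in the type name, so "bumping the version" means introducing a new variant of the struct when its layout changes. The sketch below is purely illustrative; the names and members are not taken from the PR.

```c
/* Illustrative only: a versioned module struct in the OMPI naming spirit. */
#define ILLUSTRATIVE_OP_MAX 32

typedef void (op_handler_fn_t)(const void *in, void *inout, int count);

typedef struct op_module_1_0_0 {
    op_handler_fn_t *opm_fns[ILLUSTRATIVE_OP_MAX]; /* existing handlers */
} op_module_1_0_0_t;

/* "Bumping the version" would mean declaring the extended layout under a
 * new name, e.g. op_module_1_1_0_t, so the two layouts can be told apart.
 * The reply above argues that is not needed for this change. */
```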