Currently cudawrappers (obviously) only supports CUDA. The suggestion is to split the user interface from the backend, so we could keep the CUDA backend we have now and add a HIP backend. This way, the user could write code that runs either with CUDA directly, with CUDA through HIP, or on AMD GPUs through HIP.
As a potential use case, I have a proof-of-concept tensor-core-based beamformer currently written in both CUDA and HIP, which runs in all three ways described above. Since the HIP and CUDA calls are basically identical apart from the prefix, this results in a lot of duplicated code that could be hidden very nicely by cudawrappers. Potential follow-up issue: come up with a new name for cudawrappers, as it would no longer be CUDA-only.