Common interface between backends #26
Comments
Shared memory: I haven't put too much thought into a good language construct, hence the current macro, but I'd be willing to replace it with something more portable between CUDAnative and GLSL. Problem is, I'm abusing … What do you propose? Could you elaborate on the …

Intrinsics: I need something similar in order to specialize for hardware generations, e.g. for hardware-specific intrinsics, optimized implementations (like the Kepler-specific reduction), or to allow just using …
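(The macro referred to above is presumably CUDAnative's static shared-memory macro; a minimal usage sketch, assuming the `@cuStaticSharedMem(T, dims)` form and a 256-thread block:)

```julia
using CUDAnative

function reduce_kernel(out, x)
    # Statically-sized shared memory, allocated once per block.
    tmp = @cuStaticSharedMem(Float32, 256)
    i = threadIdx().x
    tmp[i] = x[i]
    sync_threads()  # barrier before reading neighbouring threads' values
    # ... tree reduction over `tmp`, writing the block result to `out` ...
    return
end
```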
No, as we can propagate/infer address space information at the LLVM level, we just need to have proper AS info at the 'end' (i.e. where we do an …).

I meant the support to introduce globals in the LLVM module. I think this is going to be covered by vtjnash's …
Closing some of the speculative/too-ambitious issues. I don't think it should be CUDAnative's task both to support all of CUDA C and to export it through a shared interface. That could be tackled by a Plots.jl-like package.
Now that I have a working prototype for GLSL transpilation, it'd be nice to have the same Julia code compile to GLSL and CUDAnative without hassle!
Shared Memory
In GLSL, it seems keywords like `shared` are just one keyword from a set of qualifiers. So I had the idea of creating an intrinsic type `Qualified{Qualifier, Type}`, so you could create shared memory like this:
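(A minimal sketch of the idea; the `Qualifier` hierarchy, the `Shared` singleton, and the constructor are hypothetical, not an existing CUDAnative or transpiler API:)

```julia
# Hypothetical memory-space qualifiers to parameterize on.
abstract type Qualifier end
struct Shared <: Qualifier end

# Wraps a value together with a qualifier; a backend could lower this
# to `__shared__ float tmp[256]` in CUDA C or `shared float tmp[256]`
# in a GLSL compute shader.
struct Qualified{Q <: Qualifier, T}
    data::T
end

# Inside a kernel: request a statically-sized shared-memory array.
tmp = Qualified{Shared, Vector{Float32}}(Vector{Float32}(undef, 256))
```

Dispatching on the `Q` parameter would let each backend decide how to allocate and lower the qualified value during code generation.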
I'm not sure how well this can work with CUDAnative's code generation...
Intrinsics
There are a lot of shared intrinsics like memory barriers, work-group index getters, etc.
The problem with them is that we'd need to dispatch on some backend type in order to select the correct intrinsic name for each backend.
I could in theory just mirror the CUDA names, since I go through the Julia code anyway and can just replace them with the correct names for GLSL.
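(A minimal sketch of the dispatch idea; the backend singletons and the name mapping are hypothetical, not part of CUDAnative:)

```julia
# Hypothetical backend types to dispatch on.
abstract type Backend end
struct CUDABackend <: Backend end
struct GLSLBackend <: Backend end

# One generic function per intrinsic; each method returns the
# backend-specific name to emit during code generation/transpilation.
barrier_name(::CUDABackend) = :__syncthreads             # CUDA C
barrier_name(::GLSLBackend) = :barrier                   # GLSL compute

local_index_name(::CUDABackend) = :threadIdx             # CUDA C
local_index_name(::GLSLBackend) = :gl_LocalInvocationID  # GLSL compute
```

Mirroring the CUDA names instead would make this mapping trivial on the CUDAnative side and push all the renaming into the GLSL transpiler pass.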
Any thoughts on this?