feat: intel xpu register #1260

Draft
avik-pal wants to merge 4 commits into main from ap/intel_registration_code

Conversation

@avik-pal (Collaborator) commented on May 6, 2025

We should really be shipping this as a JLL, but for the time being...

@giordano (Member) commented on May 6, 2025

On my laptop I have both Nvidia and Intel GPUs, I'm curious to see how this would work 😅

@avik-pal (Collaborator, Author) commented on May 6, 2025

https://buildkite.com/julialang/reactant-dot-jl/builds/8739/steps?jid=0196a394-3ddb-4972-9e75-a57a92eae6c1#0196a394-3ddb-4972-9e75-a57a92eae6c1/273-692 — this is probably why we should do a JLL 😓. Shipping the dependencies from https://github.com/JuliaGPU/oneAPI.jl/blob/28d7eedb553831e1d8bc82919808d1106220843b/Project.toml#L27C1-L29 should fix this, I think?
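
Roughly, "shipping" them would mean something like the sketch below (package names are my assumption about which support/loader JLLs those Project.toml lines refer to; the real fix would be declaring them in Reactant's own Project.toml):

```julia
# Sketch only: pull the oneAPI runtime JLLs into the active environment so the
# SYCL / Level Zero libraries come from artifacts instead of a system oneAPI
# install. Package names are an assumption based on oneAPI.jl's dependencies.
using Pkg
Pkg.add(["oneAPI_Support_jll", "oneAPI_Level_Zero_Loader_jll"])

using oneAPI_Support_jll, oneAPI_Level_Zero_Loader_jll

# Every JLL exposes its artifact directory; these are what the XPU client
# would need on its library search path.
@show oneAPI_Support_jll.artifact_dir
@show oneAPI_Level_Zero_Loader_jll.artifact_dir
```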

> On my laptop I have both Nvidia and Intel GPUs, I'm curious to see how this would work 😅

That is the same as my setup. Ideally this should register both the Intel and NVIDIA GPUs with Reactant, but use the NVIDIA GPU by default unless we call `Reactant.set_default_backend("sycl")`.
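
Roughly what I have in mind, treating `"sycl"` as a placeholder backend name (not a settled API):

```julia
using Reactant

# With both GPUs registered, Reactant would keep defaulting to the CUDA
# client; switching to the Intel XPU would be an explicit opt-in.
# ("sycl" is the backend name assumed in this comment, not final.)
Reactant.set_default_backend("sycl")

x = Reactant.to_rarray(rand(Float32, 4, 4))  # now placed on the Intel GPU
```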

@avik-pal (Collaborator, Author) commented on May 6, 2025

Can confirm that even the Python intel-extension-for-openxla doesn't ship with all of the required libraries. But installing the support JLL and the loader JLL gets us all of the ones needed there.
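
For reference, a rough way to check which shared libraries those two JLLs actually provide (assuming they are `oneAPI_Support_jll` and `oneAPI_Level_Zero_Loader_jll`, and that their libraries live under `lib/` in the artifact as usual):

```julia
# Assumption: "support JLL" / "loader JLL" refer to oneAPI_Support_jll and
# oneAPI_Level_Zero_Loader_jll from the General registry.
using Libdl
using oneAPI_Support_jll, oneAPI_Level_Zero_Loader_jll

for jll in (oneAPI_Support_jll, oneAPI_Level_Zero_Loader_jll)
    libdir = joinpath(jll.artifact_dir, "lib")   # standard JLL layout (assumed)
    libs = filter(f -> endswith(f, "." * Libdl.dlext), readdir(libdir))
    println(nameof(jll), " provides: ", join(libs, ", "))
end
```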

avik-pal force-pushed the ap/intel_registration_code branch from 8a3d680 to 94bf2b4 on May 6, 2025 at 14:32