Error when trying to use NVIDIA 5090 #308
To run on an RTX 5090, torch and torchvision built with CUDA 12.8 are required.
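As a quick check of whether an installed build meets that requirement (a minimal diagnostic sketch, assuming torch is importable in the nunif Python environment), the reported architecture list should include sm_120 for the RTX 5090:

```
rem Print the CUDA version torch was built against and the GPU
rem architectures it ships kernels for (sm_120 = RTX 5090 / Blackwell).
python -c "import torch; print(torch.version.cuda); print(torch.cuda.get_arch_list())"
```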
Hello nagadomi, first of all, thank you for the great work you do for the community. I can confirm that the PyTorch 1.13.10 libraries for Windows with CUDA 12.8 support have already been released, but the issue persists. What would you recommend to make the program work correctly? Thanks in advance!
cu128 is only available for torch 2.7, and torch 2.7 has only just become available. The installation steps are probably as follows (obviously I haven't tried them yet).
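Something along these lines, as a sketch only (it assumes the cu128 wheels are installed from PyTorch's official wheel index into the same Python environment nunif uses):

```
rem Install torch/torchvision built against CUDA 12.8 from the cu128 index.
pip install --upgrade torch torchvision --index-url https://download.pytorch.org/whl/cu128
```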
Also note that if you run …
I have already installed Torch 2.7, and the CUDA 12.8 version is also ready. However, I’m still missing torchvision. The command you suggested doesn’t work for me. Where can I check for updates on torchvision to stay up to date on its availability? Thanks!!
The above command will not work until torchvision cu128 is released.
Also, unofficial builds may exist, but be careful: they are a security risk.
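One way to watch for the official torchvision cu128 release is to query PyTorch's wheel index directly (a sketch; `pip index` is still marked experimental, so the output format may vary):

```
rem Lists the torchvision versions currently published on the cu128 index;
rem once a matching build appears, the install command above should work.
pip index versions torchvision --index-url https://download.pytorch.org/whl/cu128
```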
torchvision cu128 for Windows is now available. You can also comment out the following line in nunif/windows_package/update.bat (line 46 in e830a9a) and replace it with the cu128 equivalent.
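The exact contents of line 46 are not reproduced here, so the snippet below is only an illustrative guess at the kind of replacement meant: point the pip install line in update.bat at the cu128 wheel index instead of the older CUDA variant.

```
rem Hypothetical edit to nunif/windows_package/update.bat (the original
rem line and its index suffix may differ in your checkout):
rem   before: ... --index-url https://download.pytorch.org/whl/cu124
rem   after:
pip install --upgrade torch torchvision --index-url https://download.pytorch.org/whl/cu128
```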
That seems to be working, thanks! So much faster than using the CPU!
CUDA error: no kernel image is available for execution on the device. Compile with "TORCH_USE_CUDA_DSA" to enable device-side assertions... :( :( :(
Finally got it working! Huge thanks to GROK3 from xAI for guiding me step-by-step through installing the nightly PyTorch builds (torch-2.7.0.dev20250225+cu128 and torchvision-0.22.0.dev20250226+cu128) from local .whl files and resolving dependency issues with lpips and timm. Couldn’t have done it without the detailed help!
When trying to use my graphics card, I get an error:

RuntimeError:
CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
I am running this on Windows; using the CPU works, but it is very slow. Would love to use my 5090.