
How to get ROCm running on R9 390 series? #6680

Open
40476 opened this issue Feb 2, 2025 · 4 comments
Labels
User Support A user needs help with something, probably not a bug.

Comments

@40476
Contributor

40476 commented Feb 2, 2025

Your question

I've been pulling my hair out (figuratively, of course) trying to get ROCm running on an R9 390. I know it's roughly 10 years old, but my only problem with the card is that I cannot get ROCm to work on it. I'm open to alternative ways of getting ComfyUI running on my GPU.

Logs

usr_40476@2xeon:~/APPS/ComfyUI> python3.12 main.py 
Traceback (most recent call last):
  File "/home/usr_40476/APPS/ComfyUI/main.py", line 91, in <module>
    import execution
  File "/home/usr_40476/APPS/ComfyUI/execution.py", line 13, in <module>
    import nodes
  File "/home/usr_40476/APPS/ComfyUI/nodes.py", line 21, in <module>
    import comfy.diffusers_load
  File "/home/usr_40476/APPS/ComfyUI/comfy/diffusers_load.py", line 3, in <module>
    import comfy.sd
  File "/home/usr_40476/APPS/ComfyUI/comfy/sd.py", line 5, in <module>
    from comfy import model_management
  File "/home/usr_40476/APPS/ComfyUI/comfy/model_management.py", line 143, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
                                  ^^^^^^^^^^^^^^^^^^
  File "/home/usr_40476/APPS/ComfyUI/comfy/model_management.py", line 112, in get_torch_device
    return torch.device(torch.cuda.current_device())
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/usr_40476/.local/share/pipx/venvs/pip/lib64/python3.12/site-packages/torch/cuda/__init__.py", line 971, in current_device
    _lazy_init()
  File "/home/usr_40476/.local/share/pipx/venvs/pip/lib64/python3.12/site-packages/torch/cuda/__init__.py", line 319, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No HIP GPUs are available
usr_40476@2xeon:~/APPS/ComfyUI>

Other

No response
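
The RuntimeError in the log above comes from torch._C._cuda_init() finding no HIP device. That can mean either that the installed PyTorch wheel has no ROCm support at all, or that the ROCm runtime doesn't recognize the card. A minimal sketch to tell the two cases apart, assuming only that torch is importable in the same environment:

    # Minimal sketch: distinguish "no ROCm build of PyTorch" from
    # "ROCm build, but no usable HIP device".
    import torch

    print("torch:", torch.__version__)
    print("HIP runtime:", torch.version.hip)          # None on CPU- or CUDA-only builds
    print("device visible:", torch.cuda.is_available())  # ROCm builds reuse the cuda API
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))

If torch.version.hip is None, the wheel itself was built without ROCm; if it prints a version but no device is visible, the runtime is rejecting the GPU, which matches the error in the log.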

40476 added the User Support label on Feb 2, 2025
@patientx

patientx commented Feb 3, 2025

You need the ROCm libraries for a start, but as far as I know there are none for your GPU. Your GPU's target is GFX7 (gfx702). So far we have libraries (across various ROCm versions from 5.7.1 to 6.2.x) for gfx803, gfx900, gfx902, gfx906, gfx1010, gfx1011, gfx1012, gfx1030, gfx1031, gfx1032, gfx1035, gfx1100, gfx1101, gfx1102, gfx1103, and gfx1150. Everything else is either too old (yours is GFX7, as you can see, and nothing from that generation is covered) or new enough that it doesn't need extra libraries to run ROCm. The only option you have is DirectML.
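
To confirm which gfx target the runtime actually reports for a card, here is a sketch assuming the ROCm base packages (which ship rocminfo) are installed and on PATH:

    # Hedged sketch: ask the ROCm runtime which gfx targets it sees.
    # Assumes rocminfo (from the ROCm base packages) is installed.
    import subprocess

    out = subprocess.run(["rocminfo"], capture_output=True, text=True).stdout
    targets = {line.split()[1] for line in out.splitlines()
               if line.strip().startswith("Name:") and "gfx" in line}
    print("gfx targets reported:", sorted(targets))

If the reported target isn't in the list above, that ROCm release ships no kernels for the card.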

@40476
Contributor Author

40476 commented Feb 3, 2025 via email

@40476
Contributor Author

40476 commented Feb 4, 2025

Turns out DirectML is not supported on Linux except under WSL, so does that mean I'm just out of luck?

@patientx

patientx commented Feb 4, 2025

> Turns out DirectML is not supported on Linux except under WSL, so does that mean I'm just out of luck?

Windows :(
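
One alternative that sidesteps the GPU question entirely: ComfyUI can run all computation on the CPU. It's slow, but it avoids both ROCm and DirectML; this assumes the --cpu flag exposed by ComfyUI's main.py:

    python3.12 main.py --cpu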
