onnxruntime run time issues - Image Rembg #6666
Comments
I think you are installing into your global Python rather than into the embedded Python that ComfyUI portable uses to run. In fact, you don't even need Python installed on your system. As you can see from your logs, your PyTorch version is 2.6.0+cu126. To install packages into the embedded Python, you have to open a terminal inside the python_embeded folder and start the command with the embedded interpreter instead of plain pip.
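A minimal example, assuming the default portable layout (ComfyUI_windows_portable\python_embeded) and using onnxruntime-gpu only as a placeholder package name:

```
REM open a terminal inside the python_embeded folder, then install with the embedded interpreter:
.\python.exe -m pip install onnxruntime-gpu
```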
Here's the command if you want to upgrade your PyTorch to the latest stable version with cu126:
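Roughly along these lines, run from the same python_embeded folder; the cu126 URL is the standard PyTorch wheel index for CUDA 12.6 builds:

```
.\python.exe -m pip install --upgrade torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu126
```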
Wow, I definitely was installing to global, yup. Still didn't fix it unfortunately, but that's a start! I had no idea it was essentially running its own embedded Python from that folder. That does give better dependency traces, hah. It seems odd to me though: checking those, it looks like it's searching for older versions of the CUDA files? From like CUDA 8.x and CUDA 11.x, versus the CUDA 12 I have on there.
What are you trying to do? I've never seen that "Dependency Walker" before. Btw, if you want to see all the packages installed in the embedded Python, you can open a terminal in the python_embeded folder and run pip through the embedded interpreter.
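Something like this, again from the python_embeded folder:

```
REM list every package installed in the embedded Python
.\python.exe -m pip list
```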
At this point I would try another RemBG node; I like to use the nodes from ComfyUI_essentials. There is also this configuration:
Try ben2, a very good background remover.
Your question
Could use some help figuring out what I'm missing; I've been spinning my gears for the last couple of hours reading the docs and Python.
I can't seem to figure out what I'm missing in the CUDA, cuDNN, TensorRT trifecta install.
To my understanding of the docs, I should have the right version numbers at least...
| ONNX Runtime | CUDA | cuDNN | Notes |
| --- | --- | --- | --- |
| 1.20.x | 12.x | 9.x | Available in PyPI. Compatible with PyTorch >= 2.4.0 for CUDA 12.x. |
- onnx version = "1.20.1"
- CUDA 12.8
- cuDNN 9.7
- Torch version: 2.6.0+cu126
- `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin`
- `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\CUDNN\v9.7\bin\12.8`
- `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\TensorRT-10.8.0.43`
C:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:866 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasnt able to be loaded. Please install the correct version of CUDA andcuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
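For reference, a couple of quick checks along these lines (DLL names are assumed from the usual CUDA 12 / cuDNN 9 naming scheme; "python" here means whichever interpreter ComfyUI actually runs with):

```
REM Do the CUDA 12 / cuDNN 9 DLLs resolve from PATH? (file names assumed from the usual naming)
where cudart64_12.dll
where cublas64_12.dll
where cudnn64_9.dll

REM Which execution providers does the installed onnxruntime build report?
python -c "import onnxruntime as ort; print(ort.get_available_providers())"
```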