
libbitsandbytes_cuda128.dll file does not exist #1510

Open
tripleS-Dev opened this issue Feb 11, 2025 · 3 comments
@tripleS-Dev

System Info

The libbitsandbytes_cuda128.dll file does not exist on my RTX 5090 Windows 11 system.

(venv) A:\ai\comfy_me\ComfyUI>python -m bitsandbytes
Could not find the bitsandbytes CUDA binary at WindowsPath('A:/ai/comfy_me/ComfyUI/venv/lib/site-packages/bitsandbytes/libbitsandbytes_cuda128.dll')
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++ BUG REPORT INFORMATION ++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++ OTHER +++++++++++++++++++++++++++
CUDA specs: CUDASpecs(highest_compute_capability=(12, 0), cuda_version_string='128', cuda_version_tuple=(12, 8))
PyTorch settings found: CUDA_VERSION=128, Highest Compute Capability: (12, 0).
Library not found: A:\ai\comfy_me\ComfyUI\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cuda128.dll. Maybe you need to compile it from source?
If you compiled from source, try again with `make CUDA_VERSION=DETECTED_CUDA_VERSION`,
for example, `make CUDA_VERSION=113`.

The CUDA version for the compile might depend on your conda install, if using conda.
Inspect CUDA version via `conda list | grep cuda`.
To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/docs/source/nonpytorchcuda.mdx
The directory listed in your path is found to be non-existent: C:\Users\hj6ch\.rustup\toolchains\esp\xtensa-esp32-elf-clang\esp-clang\bin\libclang.dll
The directory listed in your path is found to be non-existent: \HJ
The directory listed in your path is found to be non-existent: C:\Program Files\JetBrains\PyCharm 2024.1.6\bin
The directory listed in your path is found to be non-existent: C:\Users\hj6ch\.rustup\toolchains\esp\xtensa-esp32-elf-clang\esp-clang\bin
The directory listed in your path is found to be non-existent: C:\Users\hj6ch\.rustup\toolchains\esp\xtensa-esp-elf\bin
The directory listed in your path is found to be non-existent: C:\Program Files\JetBrains\PyCharm 2024.1.6\bin
Found duplicate CUDA runtime files (see below).

We select the PyTorch default CUDA runtime, which is 12.8,
but this might mismatch with the CUDA version that is needed for bitsandbytes.
To override this behavior set the `BNB_CUDA_VERSION=<version string, e.g. 122>` environmental variable.

For example, if you want to use the CUDA version 122,
    BNB_CUDA_VERSION=122 python ...

OR set the environmental variable in your .bashrc:
    export BNB_CUDA_VERSION=122

In the case of a manual override, make sure you set LD_LIBRARY_PATH, e.g.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.2,
* Found CUDA runtime at: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\cudart64_12.dll
* Found CUDA runtime at: C:\WINDOWS\system32\nvcuda.dll
* Found CUDA runtime at: C:\WINDOWS\system32\nvcudadebugger.dll
* Found CUDA runtime at: C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\cudart64_65.dll
* Found CUDA runtime at: C:\WINDOWS\system32\nvcuda.dll
* Found CUDA runtime at: C:\WINDOWS\system32\nvcudadebugger.dll
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Checking that the library is importable and CUDA is callable...
Traceback (most recent call last):
  File "A:\ai\comfy_me\ComfyUI\venv\lib\site-packages\bitsandbytes\diagnostics\main.py", line 66, in main
    sanity_check()
  File "A:\ai\comfy_me\ComfyUI\venv\lib\site-packages\bitsandbytes\diagnostics\main.py", line 40, in sanity_check
    adam.step()
  File "A:\ai\comfy_me\ComfyUI\venv\lib\site-packages\torch\optim\optimizer.py", line 494, in wrapper
    out = func(*args, **kwargs)
  File "A:\ai\comfy_me\ComfyUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "A:\ai\comfy_me\ComfyUI\venv\lib\site-packages\bitsandbytes\optim\optimizer.py", line 291, in step
    self.update_step(group, p, gindex, pindex)
  File "A:\ai\comfy_me\ComfyUI\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "A:\ai\comfy_me\ComfyUI\venv\lib\site-packages\bitsandbytes\optim\optimizer.py", line 521, in update_step
    F.optimizer_update_32bit(
  File "A:\ai\comfy_me\ComfyUI\venv\lib\site-packages\bitsandbytes\functional.py", line 1571, in optimizer_update_32bit
    optim_func = str2optimizer32bit[optimizer_name][0]
NameError: name 'str2optimizer32bit' is not defined
Above we output some debug information.
Please provide this info when creating an issue via https://github.com/TimDettmers/bitsandbytes/issues/new/choose
WARNING: Please be sure to sanitize sensitive info from the output before posting it.

Reproduction

python -m bitsandbytes

Expected behavior

Could not find the bitsandbytes CUDA binary at WindowsPath('A:/ai/comfy_me/ComfyUI/venv/lib/site-packages/bitsandbytes/libbitsandbytes_cuda128.dll')

@matthewdouglas
Member

Hi,

We have not released CUDA 12.8 builds yet; they will be included in an upcoming release. In the meantime, here are some options:

1. You should still be able to run by setting an environment variable that selects the CUDA 12.6 build: BNB_CUDA_VERSION=126.
2. You could build bitsandbytes from source with the CUDA 12.8 toolkit.
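The override in option 1 can be sketched in Python as well as in the shell. This is a minimal sketch under the assumption that bitsandbytes reads BNB_CUDA_VERSION when it is first imported, so the variable must be set beforehand (either in the shell, or at the top of the launching script as below):

```python
import os

# Assumption: bitsandbytes resolves its CUDA binary (e.g.
# libbitsandbytes_cuda<version>.dll on Windows) at first import,
# honoring BNB_CUDA_VERSION if it is set.
os.environ["BNB_CUDA_VERSION"] = "126"  # select the CUDA 12.6 build

# import bitsandbytes  # would now look for the *_cuda126 binary
print(os.environ["BNB_CUDA_VERSION"])
```

On Windows the same override can be set per-session with `set BNB_CUDA_VERSION=126` before running `python -m bitsandbytes`.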

@truaswild

Any news?

@Tayyab-H

Question: if I use the 12.6 build, will that work on my 50-series card?

I'm running bitsandbytes for quantization and I'm getting the following error:

The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.

I just ran "pip install -U bitsandbytes" to install it. Is there a special install for GPU support?

Any help would be greatly appreciated :)
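One quick way to diagnose the "compiled without GPU support" warning is to check which CUDA binaries the installed wheel actually contains. The sketch below is a stdlib-only illustration; the find_bnb_binaries helper is hypothetical (not part of bitsandbytes), and it assumes a standard site-packages layout:

```python
import sysconfig
from pathlib import Path

def find_bnb_binaries(site_packages: Path) -> list[str]:
    """List the CUDA binaries shipped inside an installed bitsandbytes package."""
    pkg = site_packages / "bitsandbytes"
    if not pkg.is_dir():
        return []
    # A CPU-only install ships no *cuda* library, which is what triggers
    # the "compiled without GPU support" warning shown above.
    return sorted(p.name for p in pkg.glob("*cuda*.dll")) + \
           sorted(p.name for p in pkg.glob("*cuda*.so"))

if __name__ == "__main__":
    sp = Path(sysconfig.get_paths()["purelib"])
    print(find_bnb_binaries(sp))
```

If the list comes back empty, the wheel has no GPU binary for your platform; if it lists binaries for other CUDA versions (e.g. a 12.6 one but no 12.8 one), the BNB_CUDA_VERSION override discussed above may apply.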
