Hello.
Python 3.12.7
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Fri_Jun_14_16:44:19_Pacific_Daylight_Time_2024
Cuda compilation tools, release 12.6, V12.6.20
Build cuda_12.6.r12.6/compiler.34431801_0
torch-2.6.0+cu126-cp312-cp312-win_amd64
I've spent three days trying to find and install a bitsandbytes version that will work for training on fluxgym. After launching, I get this warning:
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
Could you please help? What should I do to get this running locally with GPU support? Thanks a lot in advance.
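For context, a minimal way to check whether the installed wheel actually has GPU support is to confirm that PyTorch sees the GPU and then run the library's built-in diagnostic (this is a sketch; the exact diagnostic output varies by bitsandbytes release):

    # confirm PyTorch was built with CUDA and can see the GPU
    python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
    # run the bitsandbytes self-check; it reports whether a CUDA-enabled binary was loaded
    python -m bitsandbytes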
I also encountered the same problem.
Hi, which version of bitsandbytes have you tried? For CUDA 12.6, the minimum is 0.45.0.
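As a sketch of that upgrade, assuming a pip-managed environment (Windows CUDA-enabled wheels are published on PyPI, so no extra index should be needed):

    # remove any CPU-only build, then install a release with CUDA 12.6 support
    pip uninstall -y bitsandbytes
    pip install "bitsandbytes>=0.45.0"
    # re-run the diagnostic; it should now report a CUDA backend instead of the CPU-only warning
    python -m bitsandbytes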
0.45.0