CUDA runs out of memory with lots of memory reserved #8
Comments
Having the same problem.
The same model (latent diffusion 1.6B) does run on 8 GB when using Jack000/glid-3-xl, so it is supposed to work.
I'm having the same problem: it allocates tons of memory and then fails.
Also runs out of VRAM on a 16 GB P100, so something is definitely wrong. This did not happen with the same model on the latent-diffusion repo.
It appears I did have the problem on the latent-diffusion repo as well, but I fixed it on both by setting the PYTORCH_CUDA_ALLOC_CONF environment variable.
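In case it helps, here is a minimal sketch of setting that variable before CUDA is initialized; the max_split_size_mb value is only an assumed starting point and should be tuned for your GPU:

```python
# Hedged sketch: configure the PyTorch caching allocator before the first CUDA call.
# The value below is an assumption, not the exact setting used in this thread.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # import torch and do any CUDA work only after the variable is set

assert torch.cuda.is_available()
```

The same thing can be done from the shell by exporting the variable before launching the script.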
Where exactly should I put it? I tried pasting it in and still got the error, though granted I have less GPU memory.
What GPU are you using? Maybe it doesn't have enough VRAM.
Entirely possible. It's a 6 GB GTX 1060: not horrible, but not new or anything.
6 GB isn't enough for the large latent diffusion model (which isn't the actual stable diffusion model). It might be enough for the stable diffusion model once it releases, since that one is half the size.
I'm still getting the error, even with PYTORCH_CUDA_ALLOC_CONF set.
GeForce RTX 3060, 12 GB of VRAM.
I solved this by setting --n_samples to 1 and using --n_iter if I wanted more than one output.
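For context, --n_samples is the batch size per sampling pass, so it multiplies peak VRAM, while --n_iter just repeats the pass and keeps peak memory flat. A hedged example invocation, assuming the repo's scripts/txt2img.py entry point and a made-up prompt:

```
python scripts/txt2img.py --prompt "a watercolor painting of a fox" --n_samples 1 --n_iter 4
```

This still produces four images total, just one per pass.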
I'm trying to run the text-to-image model with the example, but CUDA keeps running out of memory despite barely trying to allocate anything: it fails trying to allocate 20 MB while 7.3 GB is already reserved. Is there any way to fix this? I've searched all over but couldn't find a clear answer.
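For anyone debugging the allocated-vs-reserved gap described above, a small sketch using standard torch.cuda introspection calls (single CUDA device assumed):

```python
import torch

# Live tensor memory vs. memory the caching allocator is holding onto.
allocated_mib = torch.cuda.memory_allocated() / 2**20
reserved_mib = torch.cuda.memory_reserved() / 2**20
print(f"allocated: {allocated_mib:.0f} MiB, reserved: {reserved_mib:.0f} MiB")

# Release cached-but-unused blocks back to the driver; can help when
# reserved memory is far larger than allocated memory.
torch.cuda.empty_cache()

# Detailed per-pool report, useful for spotting fragmentation.
print(torch.cuda.memory_summary())
```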