This repository has been archived by the owner on Oct 31, 2023. It is now read-only.

CUDA out of memory #135

Open
Ammara-Ihsan opened this issue Sep 15, 2022 · 1 comment

Comments

@Ammara-Ihsan

Hi, I'm new to this platform. Can anyone please help me solve this error? I tried reducing the number of epochs and the batch_size, and I also tried reducing the number of workers, but nothing helped.
Thanks,

RuntimeError: CUDA out of memory. Tried to allocate 752.00 MiB (GPU 0; 14.76 GiB total capacity; 9.84 GiB already allocated; 53.94 MiB free; 10.03 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
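The traceback itself suggests trying `max_split_size_mb` when reserved memory is much larger than allocated memory. A minimal sketch of how that allocator option is typically set (the 128 MiB value here is an arbitrary example, not a tuned recommendation):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before torch is imported,
# because the CUDA caching allocator reads it once at initialization.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # import torch only after the variable is set
```

Equivalently, export the variable in the shell before launching the training script.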


cgvalle commented Sep 15, 2022

It seems like your VRAM is being used by another process (the 9.84 GiB already allocated may belong to something else). Use nvidia-smi to check which processes are using the GPU.
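A small sketch of automating that check from Python. It calls `nvidia-smi --query-compute-apps=pid,used_memory --format=csv,noheader,nounits` (a real nvidia-smi query mode, which prints one `pid, mem` pair per line); the `gpu_process_memory` helper name and the sample values are made up for illustration:

```python
import subprocess

def gpu_process_memory(sample_output=None):
    """Return a list of (pid, mem_mib) pairs for processes using the GPU.

    sample_output lets the parsing be exercised on a machine
    without a GPU; when it is None, nvidia-smi is invoked.
    """
    if sample_output is None:
        sample_output = subprocess.run(
            ["nvidia-smi",
             "--query-compute-apps=pid,used_memory",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
    rows = []
    for line in sample_output.strip().splitlines():
        pid, mem = (field.strip() for field in line.split(","))
        rows.append((int(pid), int(mem)))
    return rows

# Parsing hypothetical output (values made up):
print(gpu_process_memory("1234, 9843\n5678, 512\n"))
# → [(1234, 9843), (5678, 512)]
```

Any process holding several GiB that isn't your training run is a candidate to kill before retrying.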
