This repository has been archived by the owner on Aug 1, 2024. It is now read-only.
RuntimeError: CUDA out of memory. Tried to allocate 3.05 GiB (GPU 0; 23.65 GiB total capacity; 17.04 GiB already allocated; 2.81 GiB free; 19.77 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF #672
OOM! I am currently trying to resolve an issue related to deal, but I'm facing some difficulties. I have tried several approaches, but I still can't get past this error: RuntimeError: CUDA out of memory.
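The error message itself suggests one thing to try: setting `max_split_size_mb` through `PYTORCH_CUDA_ALLOC_CONF` to reduce fragmentation. A minimal sketch of that (the variable has to be set before PyTorch makes its first CUDA allocation, ideally before `import torch`; the value 128 is just an illustrative example, not a tuned recommendation):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before the first CUDA allocation,
# so set it before importing torch. 128 MB is an example value only.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # import torch only after the variable is set
```

If reserved memory stays far above allocated memory, lowering the batch size or clearing cached activations between steps are the usual complementary fixes.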
I'm reaching out to the GitHub community for assistance, hoping that experienced members might be able to offer some advice, insights, or solutions. If you could provide any help, I would greatly appreciate it.
Also, how can I use 2 GPUs?
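On the two-GPU question: assuming the training script builds an ordinary `nn.Module`, the quickest way to spread each batch over two GPUs is `torch.nn.DataParallel` (PyTorch's own docs recommend `DistributedDataParallel` for better performance, but it needs more setup). A sketch, assuming two visible devices; `MyModel` is a hypothetical placeholder for whatever model the script constructs:

```python
import torch
import torch.nn as nn

model = MyModel()  # hypothetical placeholder for the actual model
if torch.cuda.device_count() >= 2:
    # Splits each input batch across GPUs 0 and 1 and gathers the outputs,
    # which also roughly halves the activation memory per GPU.
    model = nn.DataParallel(model, device_ids=[0, 1])
model = model.cuda()
```

This only helps with out-of-memory errors caused by activations; model parameters and optimizer state are still replicated on every device.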
Thanks!