This repository has been archived by the owner on Aug 1, 2024. It is now read-only.

RuntimeError: CUDA out of memory. Tried to allocate 3.05 GiB (GPU 0; 23.65 GiB total capacity; 17.04 GiB already allocated; 2.81 GiB free; 19.77 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF #672

Unanswered
Tom0515Lt asked this question in Q&A
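The error message itself points at the usual mitigation: when reserved memory is much larger than allocated memory, the allocator cache is fragmented, and setting `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable can help. A minimal sketch of applying it from Python (the `128` MiB threshold is an assumed starting value to tune, not one taken from this discussion; the variable must be set before the first CUDA allocation):

```python
import os

# Assumption: 128 MiB split threshold; tune per workload.
# Set BEFORE importing torch / before any CUDA allocation,
# otherwise the caching allocator ignores it.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # import only after the env var is set

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

Equivalently, export `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` in the shell before launching the script. If fragmentation is not the cause (allocated memory is already near capacity), reducing batch size or model size is the more direct fix.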

Replies: 0
