Why do I constantly run into out-of-memory errors when running semantic segmentation, even though I have two GPUs with 15 GB each? Is it possible to distribute the model workload across the GPUs in parallel?
SAM itself is not heavy, but semantic segment anything requires four large models, which is very memory-consuming. For now, simply set `--semantic_segment_device` to 'CPU' to run it. We are working on making this model lightweight.
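If you do want to spread the load across both GPUs instead of falling back to the CPU, a common pattern is to place the lighter model on one device and the heavier models on the other, moving intermediate tensors between them. The sketch below is only an illustration of that per-model device-placement idea: the `DummySAM` and `DummySemanticModel` classes are placeholder modules, not the repository's actual models or API.

```python
# Minimal sketch of splitting models across devices in PyTorch.
# DummySAM / DummySemanticModel are placeholder modules used only to show
# the placement pattern; substitute the real model objects in practice.
import torch
import torch.nn as nn


class DummySAM(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, kernel_size=3, padding=1)

    def forward(self, x):
        return self.backbone(x)


class DummySemanticModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.head = nn.Conv2d(16, 8, kernel_size=1)

    def forward(self, x):
        return self.head(x)


# Keep the lighter SAM-like model on the first GPU; push the four heavy
# semantic models to the second GPU if present, otherwise to the CPU
# (mirroring the --semantic_segment_device suggestion above).
sam_device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
semantic_device = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cpu")

sam = DummySAM().to(sam_device).eval()
semantic_models = [DummySemanticModel().to(semantic_device).eval() for _ in range(4)]

with torch.no_grad():
    image = torch.rand(1, 3, 512, 512, device=sam_device)
    features = sam(image)
    # Move the intermediate tensor to the device hosting the heavy models.
    features = features.to(semantic_device)
    outputs = [m(features) for m in semantic_models]

print([o.shape for o in outputs])
```

Note this is model placement (pipeline-style splitting), not data parallelism: each model still has to fit on its assigned device, so it helps when the combined set of models exceeds one GPU's 15 GB but each individual model does not.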