Replies: 3 comments 2 replies
-
Please share logs from the server. This is mainly to understand the exact problem: is it the GPU? Is it the 16 GB memory? Is it the image size? You get the best throughput when you have enough GPU memory to fit your input image. If not, you can switch some of the pre-processing transforms to run on CPU instead of GPU.
-
For example, you can change this transform to run on CPU instead of GPU. I don't know which model you are running, so if you can provide more details and logs, others can help you debug.
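A minimal sketch of what that change looks like, assuming a pre-transform chain loosely based on the radiology app's segmentation infer task (the transform list and intensity values here are illustrative; your task file may differ):

```python
from monai.transforms import EnsureTyped, LoadImaged, ScaleIntensityRanged

def pre_transforms(self, data=None):
    # Illustrative pre-transform chain, not the app's exact code.
    return [
        LoadImaged(keys="image"),
        # Omitting the `device` argument keeps this tensor (and every
        # transform after this point) on CPU; passing a CUDA device here
        # is what pushes pre-processing onto the GPU.
        EnsureTyped(keys="image"),  # was: EnsureTyped(keys="image", device=data.get("device") if data else None)
        ScaleIntensityRanged(keys="image", a_min=-175, a_max=250, b_min=0.0, b_max=1.0, clip=True),
    ]
```

Pre-processing on CPU is slower per request but frees GPU memory for the network itself, which is usually the right trade on a 16 GiB card with large volumes.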
-
I'm using the default segmentation model in the radiology app. Using PYTHONPATH=/usr:
[2023-09-05 17:35:31,828] [542] [MainThread] [INFO] (main:285) - USING:: version = False
-
I'm currently running 3D Slicer in a Windows EC2 environment, with a MONAI Label server running in a Docker container on a p3.8xlarge EC2 instance. I'm using the pretrained "segmentation" model and hitting CUDA out-of-memory errors when selecting "Next Sample" to bring up my first sample. The p3.8xlarge has 4 GPUs with 16 GiB each, but it sounds like inference doesn't support multi-GPU, and "Next Sample" triggers inference, so I'm effectively running with 16 GiB anyway. This is cardiac imaging, so the volumes have a high slice count at 512x512 in-plane. I'm assuming this might work with 16 GiB if I had smaller volumes?
Just wondering if there are specific GPU memory requirements for AIAA. Is this behavior expected when running on a GPU of this size? Is there any workaround?
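One possible workaround, if the default segmentation task runs sliding-window inference: MONAI's SlidingWindowInferer can keep the network forward passes on GPU while stitching the full-volume output on CPU, and smaller patches lower peak GPU memory. A minimal sketch (the roi_size and variable names are illustrative, not the app's defaults):

```python
import torch
from monai.inferers import SlidingWindowInferer

# Two knobs that typically reduce peak GPU memory for large volumes
# on a 16 GiB card.
inferer = SlidingWindowInferer(
    roi_size=(128, 128, 128),        # smaller patches -> lower peak GPU memory
    sw_batch_size=1,                 # run one patch at a time
    overlap=0.25,
    sw_device=torch.device("cuda"),  # network forward passes stay on GPU
    device=torch.device("cpu"),      # stitch the full-volume output in CPU RAM
)

# Usage (names hypothetical): image_tensor is the pre-processed volume,
# model is the loaded segmentation network.
# output = inferer(inputs=image_tensor, network=model)
```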