Hi everyone,
I am currently trying to use the 3D U-Net implemented in MONAI to segment the liver from a series of CT images.
After several trials I have obtained some results from the model, but I am still confused about how to set the parameters of RandCropByPosNegLabeld (spatial_size, num_samples) and sliding_window_inference (roi_size, sw_batch_size) properly.
As I understand it, to give all inputs the same shape, RandCropByPosNegLabeld randomly cuts the image into crops of size spatial_size, and num_samples is the number of crops taken from each image. Should I set spatial_size according to the size of the liver?
For example, if the liver measures (250, 200, 150), should I set spatial_size to something like (256, 256, 256) or (128, 128, 128)? I did try (250, 200, 150) directly, but I then got a tensor size mismatch error. Why does this happen?
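To make the question concrete, here is roughly the kind of crop setup I mean (a minimal sketch; the spatial_size and num_samples values below are placeholders, not my actual configuration):

```python
from monai.transforms import RandCropByPosNegLabeld

# Sketch only -- spatial_size and num_samples are example values, not my real settings.
crop = RandCropByPosNegLabeld(
    keys=["image", "label"],
    label_key="label",
    spatial_size=(128, 128, 128),  # size of each random crop
    pos=1,
    neg=1,                         # ratio of foreground- vs background-centred crops
    num_samples=4,                 # number of crops drawn from each input volume
    image_key="image",
    image_threshold=0,
)
```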
Likewise, as I understand it, sliding_window_inference makes predictions by moving a sliding window across the whole image, with roi_size being the size of each window and sw_batch_size being the number of windows (when I set sw_batch_size to 4, will four windows be processed at the same time?). The same question applies here: should I set roi_size according to the size of the liver, and do all three axis lengths need to be the same?
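And this is roughly how I call the inference (again just a sketch; val_images and model stand in for my validation volume and trained network, and the roi_size / sw_batch_size values are placeholders):

```python
import torch
from monai.inferers import sliding_window_inference

# Sketch only -- val_images and model are stand-ins for my actual data and 3D U-Net.
with torch.no_grad():
    outputs = sliding_window_inference(
        inputs=val_images,          # e.g. a (1, 1, H, W, D) CT volume tensor
        roi_size=(128, 128, 128),   # size of each sliding window
        sw_batch_size=4,            # number of windows run through the model at once
        predictor=model,            # the trained 3D U-Net
        overlap=0.25,               # fraction of overlap between neighbouring windows
    )
```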
I have listed below the shapes, spacings, and liver sizes of all the images; which size would be appropriate for spatial_size and roi_size?
Thanks a lot in advance!