-
Hi, the text in the screenshot that you've attached suggests two different approaches: one for training and another for validation/testing/inference. During training, a whole medical image may be too large to fit into GPU memory, so the common approach is to randomly crop small patches from it. When it comes to validation/testing/inference, you want to put the whole image through your network; you can do this bit-by-bit using sliding-window inference, which covers the full volume with overlapping patches and stitches the predictions back together. The 3D spleen segmentation notebook gives examples of both of these functionalities: https://github.com/Project-MONAI/tutorials/blob/master/3d_segmentation/spleen_segmentation_3d.ipynb. I hope this answers your question as to how to be sure that all of your image will be passed through the model at inference time. Let me know if you want any further clarification.
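In MONAI this is provided by `sliding_window_inference` in `monai.inferers`; the core idea can be sketched in plain NumPy as a hypothetical helper (assuming the ROI fits inside the volume, and clamping the last window to the edge so no voxel is skipped):

```python
# Minimal NumPy sketch of sliding-window inference (a hypothetical helper,
# not MONAI's implementation): cover a volume with overlapping patches,
# run the model on each patch, and average overlapping predictions.
import numpy as np

def sliding_window_predict(volume, roi_size, step, predictor):
    """Run `predictor` on overlapping patches and average the results."""
    out = np.zeros(volume.shape, dtype=float)
    counts = np.zeros(volume.shape, dtype=float)
    D, H, W = volume.shape
    rd, rh, rw = roi_size
    sd, sh, sw = step
    # Start positions along each axis; the final start is clamped so the
    # patch grid always reaches the far edge of the volume.
    zs = list(range(0, D - rd + 1, sd))
    ys = list(range(0, H - rh + 1, sh))
    xs = list(range(0, W - rw + 1, sw))
    if zs[-1] != D - rd: zs.append(D - rd)
    if ys[-1] != H - rh: ys.append(H - rh)
    if xs[-1] != W - rw: xs.append(W - rw)
    for z in zs:
        for y in ys:
            for x in xs:
                patch = volume[z:z + rd, y:y + rh, x:x + rw]
                out[z:z + rd, y:y + rh, x:x + rw] += predictor(patch)
                counts[z:z + rd, y:y + rh, x:x + rw] += 1.0
    return out / counts  # every voxel was visited at least once

# Identity "model": the stitched output reproduces the input exactly,
# which also demonstrates that the whole volume was covered.
vol = np.random.rand(13, 17, 19)
pred = sliding_window_predict(vol, roi_size=(8, 8, 8), step=(4, 4, 4),
                              predictor=lambda p: p)
assert np.allclose(pred, vol)
```

The overlap (controlled here by `step`, and by the `overlap` argument in MONAI's `sliding_window_inference`) trades compute for smoother predictions at patch borders.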
-
Hello, may I ask whether there is any paper or material that supports the claim "Medical image data volume may be too large to fit into GPU memory. A widely-used approach is to randomly draw small size data samples during training and run a 'sliding window' routine for inference." mentioned in "4. Randomly crop out batch images based on positive/negative ratio"?
For example, if I use the RandCropByPosNegLabel transform, how can I make sure that every region containing the target area is sampled? If I can't, does that mean I can't make the best use of my data?
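For intuition, here is a toy simulation (not MONAI's actual transform; the patch size, label shape, and draw count are assumptions) of the positive-sampling behaviour, where crops are centred on randomly chosen foreground voxels, tracking how much of the foreground has appeared in at least one crop:

```python
# Toy sketch of positive-centred random cropping (hypothetical, 2D for
# brevity): repeatedly centre a patch on a random foreground voxel and
# measure what fraction of the foreground has been covered so far.
import numpy as np

rng = np.random.default_rng(0)

label = np.zeros((32, 32), dtype=bool)
label[10:20, 12:22] = True            # a 10x10 foreground blob
fg = np.argwhere(label)               # candidate crop centres

covered = np.zeros_like(label)
patch = 8                             # crop size (assumed)
for _ in range(200):                  # ~200 training iterations
    cy, cx = fg[rng.integers(len(fg))]
    # Clamp the crop so it stays inside the image bounds.
    y0 = int(np.clip(cy - patch // 2, 0, label.shape[0] - patch))
    x0 = int(np.clip(cx - patch // 2, 0, label.shape[1] - patch))
    covered[y0:y0 + patch, x0:x0 + patch] = True

# Fraction of foreground voxels that appeared in at least one crop:
frac = covered[label].mean()
print(f"foreground coverage after 200 crops: {frac:.3f}")
```

In other words, no single crop sees the whole target, but across many epochs the random crops cover the foreground with probability approaching one, which is why this question is really about training length rather than lost data.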