Getting "The size of the proposed random crop ROI is larger than the image size" #6164
-
Hi @AceMcAwesome77, you could set `allow_smaller=True` (see MONAI/monai/transforms/croppad/array.py, lines 1008 to 1010 at a2ec375). Hope it can help you, thanks!
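To make the suggestion above concrete, here is a minimal sketch assuming the crop transform in use is `RandCropByPosNegLabeld`; the keys and `spatial_size` are illustrative, not taken from the original configuration:

```python
from monai.transforms import RandCropByPosNegLabeld

# With allow_smaller=True the transform accepts crop centers even when the
# volume is smaller than spatial_size along some axis, instead of raising
# "The size of the proposed random crop ROI is larger than the image size."
crop = RandCropByPosNegLabeld(
    keys=["image", "label"],   # assumed dictionary keys
    label_key="label",
    spatial_size=(96, 96, 96),
    pos=1,
    neg=1,
    num_samples=4,
    allow_smaller=True,
)
```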
-
Thanks for the reply - if I do add allow_smaller=True, my training works partway through an epoch but then throws this error:
Training (58 / 50000 Steps) (loss=0.66327): 44%| | 59/134 [03:08<04:00, 3.20s/it]
The error does not occur when I don't set the allow_smaller flag. Do you know what could be causing this? Thanks!
-
This worked! Thank you for the advice. In case anyone comes across this thread, my problems were downstream of not setting the channel dimension to 'no_channel' properly in the train_transforms (my NIfTI files did not have a channel dimension). I will attach my working code here as an example. Two things to note: be sure the transform functions are in the correct order, and make sure to use the dictionary transform functions that end in "d" rather than their array counterparts!
train_transforms = Compose(…
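The attached pipeline is cut off above; the following is a minimal sketch of a dictionary-transform pipeline along those lines, assuming NIfTI images and labels loaded without a channel dimension and a (96, 96, 96) patch size. The keys, spacing, and intensity range are illustrative placeholders, not the original poster's exact settings:

```python
from monai.transforms import (
    Compose,
    LoadImaged,
    EnsureChannelFirstd,
    Orientationd,
    Spacingd,
    ScaleIntensityRanged,
    SpatialPadd,
    RandCropByPosNegLabeld,
)

roi_size = (96, 96, 96)

train_transforms = Compose(
    [
        # dictionary ("d") transforms keep image and label in sync
        LoadImaged(keys=["image", "label"]),
        # tell MONAI the loaded arrays have no channel dimension yet
        EnsureChannelFirstd(keys=["image", "label"], channel_dim="no_channel"),
        Orientationd(keys=["image", "label"], axcodes="RAS"),
        Spacingd(
            keys=["image", "label"],
            pixdim=(1.5, 1.5, 2.0),
            mode=("bilinear", "nearest"),
        ),
        ScaleIntensityRanged(
            keys=["image"], a_min=-175, a_max=250, b_min=0.0, b_max=1.0, clip=True
        ),
        # pad volumes with fewer than 96 slices up to the patch size
        SpatialPadd(keys=["image", "label"], spatial_size=roi_size),
        RandCropByPosNegLabeld(
            keys=["image", "label"],
            label_key="label",
            spatial_size=roi_size,
            pos=1,
            neg=1,
            num_samples=4,
            image_key="image",
            image_threshold=0,
        ),
    ]
)
```

Note the order: padding up to at least the patch size happens before the random crop, so volumes with fewer than 96 axial slices no longer trigger the ROI-size error.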
-
I dealt with this error by adding code in './monai/transforms/croppad/array.py' at line 1086.
-
Hi, I am training on a dataset which may include some NIfTI volumes with fewer than 96 axial slices. I am using a (96, 96, 96) img_size as suggested in the example dataset. During training, I intermittently but repeatedly get this error:
Training (1094 / 50000 Steps) (loss=0.54363): 4%|██████████▎ | 4/108 [00:12<05:16, 3.04s/it]
Exception in thread Thread-51:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/site-packages/monai/transforms/transform.py", line 102, in apply_transform
return _apply_transform(transform, data, unpack_items)
File "/opt/conda/lib/python3.8/site-packages/monai/transforms/transform.py", line 66, in _apply_transform
return transform(parameters)
File "/opt/conda/lib/python3.8/site-packages/monai/transforms/croppad/dictionary.py", line 861, in __call__
self.randomize(label, fg_indices, bg_indices, image)
File "/opt/conda/lib/python3.8/site-packages/monai/transforms/croppad/dictionary.py", line 852, in randomize
self.cropper.randomize(label=label, fg_indices=fg_indices, bg_indices=bg_indices, image=image)
File "/opt/conda/lib/python3.8/site-packages/monai/transforms/croppad/array.py", line 1064, in randomize
self.centers = generate_pos_neg_label_crop_centers(
File "/opt/conda/lib/python3.8/site-packages/monai/transforms/utils.py", line 520, in generate_pos_neg_label_crop_centers
centers.append(correct_crop_centers(center, spatial_size, label_spatial_shape, allow_smaller))
File "/opt/conda/lib/python3.8/site-packages/monai/transforms/utils.py", line 447, in correct_crop_centers
raise ValueError("The size of the proposed random crop ROI is larger than the image size.")
ValueError: The size of the proposed random crop ROI is larger than the image size.
But the training continues on despite the error! I am guessing this error arises when one of the training NIfTI volumes does not have 96 slices. I know that the following is true of the sliding_window_inference function, according to the documentation: "When roi_size is larger than the inputs’ spatial size, the input image are padded during inference." Shouldn't the training data also just get padded in the same way to avoid errors? At the very least, shouldn't this throw a warning rather than an error, if training is going to continue anyway? It seems like zero-padding training data that is too small is not a bad thing, especially considering the same thing must happen during validation/testing.
Thanks!
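For comparison with the quoted behaviour, here is a minimal sketch of padded sliding-window inference; the small 3D UNet and the dummy volume are hypothetical placeholders, not the original training setup:

```python
import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import UNet

# Hypothetical model and a dummy volume with only 64 axial slices (< 96).
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
).eval()
val_images = torch.zeros(1, 1, 128, 128, 64)

with torch.no_grad():
    # Inputs smaller than roi_size along any axis are padded internally,
    # which is the behaviour described in the documentation quoted above.
    val_outputs = sliding_window_inference(
        inputs=val_images,
        roi_size=(96, 96, 96),
        sw_batch_size=4,
        predictor=model,
    )
```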