Getting stuck at validation step #4
Can you set the … I have never seen this error.

@cndu234
@kingjames1155 Yes, as @hoangtan96dl also mentioned, you should either set …
Epoch 0: 0%| | 0/31 [00:00<?, ?it/s]
The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
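For what it's worth, that warning comes from PyTorch's `F.interpolate` / `nn.Upsample` and is informational rather than the cause of the hang. If you just want to silence it, `recompute_scale_factor=True` can be passed wherever the upsampling happens. A minimal sketch, not this project's actual code:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 8, 8, 8)  # (N, C, D, H, W) dummy volume

# recompute_scale_factor=True restores the pre-1.6 behavior, where the
# scale factor is recomputed from the computed output size.
y = F.interpolate(x, scale_factor=2.0, mode="trilinear",
                  align_corners=False, recompute_scale_factor=True)
print(y.shape)  # torch.Size([1, 1, 16, 16, 16])
```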
I found that the problem was sliding window inference, which caused me to get stuck in validation_step and predict_step. I haven't changed any parameters of the sliding window inference. Have you ever encountered such a situation?
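If the hang really is in the sliding window inference, it may be worth shrinking the window and the per-window batch, or stitching the full output on the CPU so a large validation volume doesn't exhaust GPU memory. A minimal sketch assuming MONAI's inferer is used; the roi_size, sw_batch_size, and model here are placeholders, not values from this repo:

```python
import torch
from monai.inferers import sliding_window_inference

def validation_forward(model, val_volume):
    """Run sliding-window inference on one validation volume.

    val_volume: tensor of shape (N, C, D, H, W) on the GPU.
    """
    with torch.no_grad():
        return sliding_window_inference(
            inputs=val_volume,
            roi_size=(64, 64, 64),       # window size; smaller windows need less memory
            sw_batch_size=1,             # windows processed at once; lower this if it stalls
            predictor=model,
            overlap=0.25,                # fraction of overlap between adjacent windows
            device=torch.device("cpu"),  # stitch the full-size output on CPU to save GPU RAM
        )
```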
No, I didn't have such a problem. I had a problem with the number of outputs: I had 4 labels while the number of outputs was set to 2. However, it wasn't throwing a validation step error. What patch and batch size do you use?
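For reference, the mismatch described above (4 labels but only 2 output channels) is fixed by making the network's output channels match the label count. A minimal sketch with MONAI's UNet; the architecture and channel sizes are illustrative, not the ones used in this project:

```python
from monai.networks.nets import UNet

NUM_CLASSES = 4  # e.g. background + 3 foreground labels

model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=NUM_CLASSES,         # must match the number of label classes
    channels=(16, 32, 64, 128, 256),  # feature maps per level (illustrative)
    strides=(2, 2, 2, 2),
    num_res_units=2,
)
```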
I used LUNA16 on a V100; the batch size is 8 and the patch size is [64, 64, 64]. I tried to lower these parameters, but it still got stuck.
I had the same problem. How can this problem be avoided?
Epoch 0: 0%| | 0/10 [00:00<?, ?it/s]
Trying to infer the batch_size from an ambiguous collection. The batch size we found is 1. To avoid any miscalculations, use self.log(..., batch_size=batch_size).
The default behavior for interpolate/upsample with float scale_factor changed in 1.6.0 to align with other frameworks/libraries, and now uses scale_factor directly, instead of relying on the computed output size. If you wish to restore the old behavior, please set recompute_scale_factor=True. See the documentation of nn.Upsample for details.
Epoch 0: 70%|█████████ | 7/10 [00:14<00:06, 2.12s/it, loss=0.724, v_num=11]
Validating: 0it [00:00, ?it/s]
Validating: 0%| | 0/3 [00:00<?, ?it/s]
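The batch_size warning in that log is harmless, but it can be silenced by passing the batch size explicitly when logging inside the LightningModule. A minimal sketch of what that looks like in a validation_step; the metric name, model, and loss here are placeholders rather than this repository's code:

```python
import pytorch_lightning as pl

class SegModule(pl.LightningModule):
    def __init__(self, model, loss_fn):
        super().__init__()
        self.model = model
        self.loss_fn = loss_fn

    def validation_step(self, batch, batch_idx):
        images, labels = batch["image"], batch["label"]
        preds = self.model(images)
        loss = self.loss_fn(preds, labels)
        # Pass batch_size explicitly so Lightning does not have to guess it
        # from an ambiguous (e.g. dict-based) batch collection.
        self.log("val_loss", loss, batch_size=images.shape[0], prog_bar=True)
        return loss
```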
I have been trying with a larger dataset, unlike the error above, but I am always getting stuck at the validation stage.
I tried with the hippocampus dataset and the results are fine, but with my custom data I am facing this problem. What could be the reason?