-
Hi, I'm using UNet following the framework laid out in the Spleen tutorial. To reduce overfitting and aid generalisation I have tried adding RandAdjustContrastd and Rand3DElasticd. The validation values are similar enough between the runs, but what I'm interested in is why the training loss is now plateauing at ~0.3 rather than <0.1. I'd expect the random augmentations to slow down learning, but it doesn't look like the loss is just going down more slowly; it seems to stop improving with more epochs.
Any advice from more seasoned ML practitioners would be much appreciated! Thanks, Jeff.
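For concreteness, here is a minimal sketch of where the two augmentations could sit in the Spleen tutorial's training transform chain. The prob, gamma, sigma_range, and magnitude_range values below are illustrative placeholders, not the settings from the run described above:

```python
from monai.transforms import (
    Compose,
    EnsureChannelFirstd,
    LoadImaged,
    Rand3DElasticd,
    RandAdjustContrastd,
    RandCropByPosNegLabeld,
    ScaleIntensityRanged,
)

train_transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    ScaleIntensityRanged(keys=["image"], a_min=-57, a_max=164,
                         b_min=0.0, b_max=1.0, clip=True),
    RandCropByPosNegLabeld(keys=["image", "label"], label_key="label",
                           spatial_size=(96, 96, 96),
                           pos=1, neg=1, num_samples=4),
    # Gamma adjustment applies only to the image; label values must
    # stay untouched.
    RandAdjustContrastd(keys=["image"], prob=0.3, gamma=(0.7, 1.5)),
    # Deform image and label together; nearest interpolation for the
    # label preserves integer class indices.
    Rand3DElasticd(keys=["image", "label"], prob=0.2,
                   sigma_range=(5, 8), magnitude_range=(50, 150),
                   mode=("bilinear", "nearest")),
])
```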
-
I guess you're only applying the random transformations to the training data and not the validation? That would explain why your training loss is worse while your validation Dice score is largely unaffected: the training set is now harder than the validation set, so the loss settles at a higher floor without generalisation actually suffering. Have you visualised a selection of images after the random transformations (see the sketch below)? I've found I needed a bit of trial and error to get good magnitude and sigma values for the random elastic transformation.
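One quick way to eyeball the augmentations, assuming a `train_files` list of dicts and the `train_transforms` pipeline from the question; re-running this gives fresh random draws:

```python
import matplotlib.pyplot as plt
import numpy as np
from monai.data import Dataset

# Indexing the Dataset applies the transform chain on the fly.
check_ds = Dataset(data=train_files, transform=train_transforms)

# RandCropByPosNegLabeld with num_samples=4 yields a list of crops.
sample = check_ds[0][0]
img, lbl = sample["image"], sample["label"]

mid = img.shape[-1] // 2  # middle axial slice
fig, axes = plt.subplots(1, 2)
axes[0].imshow(np.asarray(img[0, :, :, mid]), cmap="gray")
axes[0].set_title("augmented image")
axes[1].imshow(np.asarray(lbl[0, :, :, mid]))
axes[1].set_title("label")
plt.show()
```

If the elastic deformations look anatomically implausible at this stage, lowering magnitude_range (or raising sigma_range to smooth the displacement field) is usually the first thing to try.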