the pixelwise loss never decreased during training #56
Comments
If you use a smaller batch size, the performance will drop a lot. Besides, we have since developed a new kind of distillation loss, which is more useful. Please refer to this work: https://arxiv.org/abs/2011.13256.
Thank you for your reply! Do you mean this paper: Channel-wise Distillation for Semantic Segmentation?
https://arxiv.org/abs/2011.13256
Hello, that's amazing work! In the training phase, how did you implement the CWD module? Could you please release the training code? Thank you very much!
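For reference, below is a minimal sketch of how a channel-wise distillation (CWD) loss of the kind described in the linked paper could be written in PyTorch. It is not the authors' released implementation; the function name, temperature value, and weighting are illustrative assumptions. The idea is to normalize each channel's activation map over spatial locations with a softmax and have the student match the teacher's per-channel spatial distribution via KL divergence.

```python
import torch
import torch.nn.functional as F

def channel_wise_distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """Sketch of a channel-wise distillation loss (hypothetical helper, not the official code).

    student_logits, teacher_logits: tensors of shape (N, C, H, W).
    """
    n, c, h, w = student_logits.shape
    # Flatten spatial dimensions so each channel becomes a distribution over H*W positions.
    s = student_logits.view(n, c, -1) / temperature
    t = teacher_logits.view(n, c, -1) / temperature
    # Softmax over spatial positions for every channel.
    p_teacher = F.softmax(t, dim=-1)
    log_p_student = F.log_softmax(s, dim=-1)
    # KL divergence per channel, summed over positions, averaged over batch and channels.
    loss = F.kl_div(log_p_student, p_teacher, reduction="none").sum(-1).mean()
    return loss * (temperature ** 2)
```

In training, this term would typically be added to the usual cross-entropy loss on the student's predictions, with the teacher's logits detached from the graph.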
Hello! I only changed the batch size from 8 to 4 because of my GPU memory limit. Then I ran into NaN values, as in issue 28.
I have also tried git checkout d1ec858, but the pixelwise loss never decreased.
Can you please tell me how to deal with this problem? Thank you so much.