Question about the result of online adaptation with "L2AWad" #9
Comments
And the image shape of Synthia is scaled to half resolution: [380, 640]
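A side note on half-resolution preprocessing: when stereo pairs are downscaled, the ground-truth disparities must be rescaled by the same horizontal factor, otherwise the loss is computed against values that are 2x too large. A minimal sketch of the idea, assuming OpenCV; the function name and shapes are illustrative, not code from this repository:

```python
import cv2

def resize_stereo_sample(left, right, disp, target_h=380, target_w=640):
    # Horizontal scale factor; 0.5 when going from 1280 to 640 columns.
    scale_w = target_w / left.shape[1]
    left_r = cv2.resize(left, (target_w, target_h), interpolation=cv2.INTER_LINEAR)
    right_r = cv2.resize(right, (target_w, target_h), interpolation=cv2.INTER_LINEAR)
    # Nearest-neighbor avoids blending disparities across depth edges;
    # the disparity values themselves shrink with the image width.
    disp_r = cv2.resize(disp, (target_w, target_h), interpolation=cv2.INTER_NEAREST)
    return left_r, right_r, disp_r * scale_w
```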
Hi Tiam,
Hi, in addition, a problem I found during training is that the loss is very likely to suddenly spike, becoming as large as several thousand. The results I described earlier are from a normal training run, i.e. the loss starts around 30, drops to around 3, and converges.
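A generic mitigation for sudden loss spikes like this is to clip the global gradient norm before applying updates; whether the repository already does something similar, and whether it helps here, is untested. A minimal TensorFlow 1.x-style sketch, where the optimizer choice, learning rate, and clip value are all assumptions:

```python
import tensorflow as tf  # TF 1.x graph-mode API

def clipped_train_op(loss, learning_rate=1e-4, clip_norm=5.0):
    """Build a train op whose update magnitude is bounded even if `loss` spikes."""
    optimizer = tf.train.AdamOptimizer(learning_rate)
    grads_and_vars = optimizer.compute_gradients(loss)
    grads, variables = zip(*grads_and_vars)
    # Rescale all gradients together so their global norm never exceeds
    # clip_norm; a several-thousand loss spike then cannot blow up the weights.
    clipped, _ = tf.clip_by_global_norm(grads, clip_norm)
    return optimizer.apply_gradients(zip(clipped, variables))
```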
Hello @AlessioTonioni,
Hi Tiam,
@Tiam2Y I met the same problem here: the loss is very likely to suddenly become larger and finally becomes NaN. How did you deal with this problem?
Hello @peiran88.
@Tiam2Y Thank you for telling me that. I faced the same problem: the initial error during testing is quite large. Did you offline-train the model again with a different configuration after that? I'm now pretty confused about how to achieve accuracy similar to the paper's.
@peiran88 As far as I remember, I modified the configuration and retried several times, but got the same result. You can refer to my comments above. I think the main reason is that the disparity range of the data used for offline training is quite different from that seen during online testing, so I suggest switching to a dataset with a similar disparity range for offline training (for example, you could use the CARLA simulator to render a new dataset, as mentioned in the paper).
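One way to sanity-check this disparity-range hypothesis before retraining anything is to compare ground-truth disparity percentiles across the two datasets. A rough sketch, assuming KITTI-style 16-bit disparity PNGs with the value/256 convention; Synthia ships depth rather than disparity, so its decoding would differ, and the paths are placeholders:

```python
import glob
import numpy as np
import cv2

def disparity_percentiles(pattern):
    """Pool valid ground-truth disparities from a set of maps and summarize them."""
    values = []
    for path in sorted(glob.glob(pattern)):
        # KITTI convention: uint16 PNG, true disparity = raw / 256, 0 = invalid.
        raw = cv2.imread(path, cv2.IMREAD_UNCHANGED).astype(np.float32)
        disp = raw / 256.0
        values.append(disp[disp > 0])
    return np.percentile(np.concatenate(values), [5, 50, 95, 99])

# Placeholder paths -- compare the two distributions before retraining.
print("offline train:", disparity_percentiles("synthia_disp/*.png"))
print("online test: ", disparity_percentiles("kitti_disp/*.png"))
```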
Hello @AlessioTonioni! Thanks for the great work!
I'm at it again and have questions about the results of "Learning to adapt".
I used 12 Synthia video sequences as the dataset and meta-trained the network you provided with the following parameters:
After training, I used these weights to test online adaptation on video sequences from DrivingStereo and KITTI raw data. I found that the prediction results for the first few frames were extremely poor (the D1 error rate is close to 99%), but after 100 to 200 frames D1 quickly drops below 10%.
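For reference, D1 here is the standard KITTI outlier rate: a pixel counts as an outlier when its disparity error exceeds both 3 px and 5% of the ground-truth disparity. A minimal NumPy sketch:

```python
import numpy as np

def d1_error(pred, gt):
    """KITTI D1: share of valid pixels whose error is > 3 px AND > 5% of GT."""
    valid = gt > 0                  # 0 marks pixels without ground truth
    err = np.abs(pred[valid] - gt[valid])
    outliers = (err > 3.0) & (err > 0.05 * gt[valid])
    return outliers.mean()          # ~0.99 for the first frames described above
```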
I would like to ask:
Sorry for the troublesome questions, but I'd appreciate your answers!