
A tiny bug during training #46

Open
ZhanyuGuo opened this issue Mar 9, 2022 · 0 comments

Hi, thanks for your great work!

I found a tiny bug that affects the learning-rate decay: in the function train() in train.py, when epoch reaches freeze_teacher_epoch, the optimizer and lr_scheduler are re-created, so from the lr_scheduler's point of view the epoch count restarts at 0.

I have verified that the lr never decays in a normal training run, because step_size=15 and the lr_scheduler is reset exactly when epoch == 15, so the decay step is never reached.
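
For reference, here is a minimal sketch of the kind of fix I mean, written against PyTorch's StepLR. The helper name `rebuild_optimizer` and its arguments are only for illustration and are not the actual code in train.py; the idea is simply to set `initial_lr` and pass `last_epoch` when the scheduler is re-created, so the decay schedule keeps counting from the current epoch instead of restarting at 0.

```python
import torch

def rebuild_optimizer(parameters, lr, step_size, current_epoch):
    """Hypothetical helper: re-create the optimizer and scheduler at
    freeze_teacher_epoch without resetting the LR-decay clock."""
    optimizer = torch.optim.Adam(parameters, lr=lr)

    # StepLR requires 'initial_lr' in each param group when last_epoch != -1.
    for group in optimizer.param_groups:
        group.setdefault("initial_lr", lr)

    # Passing last_epoch tells the new scheduler how many epochs have already
    # run; with the default last_epoch=-1, the decay at epoch == step_size
    # (15 here) is silently skipped.
    scheduler = torch.optim.lr_scheduler.StepLR(
        optimizer, step_size=step_size, gamma=0.1, last_epoch=current_epoch - 1)
    return optimizer, scheduler
```

With step_size=15 and current_epoch=15, the rebuilt scheduler applies the decay that was previously being skipped.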

I fixed it and trained a new model under the same conditions, getting the following results:

|          | abs_rel | sq_rel | rmse  | rmse_log | a1    | a2    | a3    |
|----------|---------|--------|-------|----------|-------|-------|-------|
| KITTI_MR | 0.098   | 0.770  | 4.459 | 0.176    | 0.900 | 0.965 | 0.983 |
| NEW      | 0.100   | 0.755  | 4.423 | 0.178    | 0.899 | 0.964 | 0.983 |

The new model seems slightly better in sq_rel and rmse.
