Hi, thanks for your great work!

I found a tiny bug that affects the learning-rate decay: in the function `train()` in `train.py`, when `epoch` reaches `freeze_teacher_epoch`, the optimizer and `lr_scheduler` are reset, so the epoch count restarts from 0 from the `lr_scheduler`'s point of view.

I have verified that the lr never decays in normal training, because `step_size=15` and the `lr_scheduler` is reset exactly when `epoch == 15`.
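Here is a minimal sketch of the pattern (not the repo's exact code; the model and the `build_optim` helper are placeholders I made up for illustration):

```python
import torch.nn as nn
import torch.optim as optim

# Toy stand-ins; in train.py these are the actual networks and options.
model = nn.Linear(10, 1)
freeze_teacher_epoch, num_epochs, lr = 15, 20, 1e-4

def build_optim():
    opt = optim.Adam(model.parameters(), lr=lr)
    sched = optim.lr_scheduler.StepLR(opt, step_size=15, gamma=0.1)
    return opt, sched

optimizer, lr_scheduler = build_optim()

for epoch in range(num_epochs):
    if epoch == freeze_teacher_epoch:
        # The bug: rebuilding both objects gives a scheduler whose internal
        # epoch counter restarts at 0 and an optimizer back at lr=1e-4.
        # With only num_epochs - 15 = 5 epochs left, step_size=15 is never
        # reached again, so the lr effectively never decays during training.
        optimizer, lr_scheduler = build_optim()

    # ... one training epoch with `optimizer` ...
    lr_scheduler.step()
    print(epoch, lr_scheduler.get_last_lr())
```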
I fixed it and trained a new model under the same conditions, getting the following results:
| | abs_rel | sq_rel | rmse | rmse_log | a1 | a2 | a3 |
|---|---|---|---|---|---|---|---|
| KITTI_MR | 0.098 | 0.770 | 4.459 | 0.176 | 0.900 | 0.965 | 0.983 |
| NEW | 0.100 | 0.755 | 4.423 | 0.178 | 0.899 | 0.964 | 0.983 |
The new model seems better in `sq_rel` and `rmse`.
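For reference, here is one way the reset could be avoided; this is a sketch under assumptions (`teacher_net` is a placeholder name), not necessarily the exact change I made in `train.py`: instead of rebuilding the optimizer and `lr_scheduler`, freeze the teacher's parameters and leave both objects untouched, so the scheduler's epoch count (and the decay at `step_size=15`) is preserved.

```python
if epoch == freeze_teacher_epoch:
    # Freeze the teacher in place instead of rebuilding optimizer/lr_scheduler.
    # (Assumes gradients are cleared with zero_grad(set_to_none=True), so Adam
    # skips parameters whose grad is None.)
    for p in teacher_net.parameters():    # teacher_net: placeholder name
        p.requires_grad = False
    # optimizer and lr_scheduler keep their state, so the scheduler's epoch
    # count is preserved and the lr still decays once it reaches step_size=15.
```

If the optimizer really has to be rebuilt (e.g. to drop the teacher's parameter group), the old scheduler's `state_dict()` can be loaded into the new scheduler so its epoch count carries over instead of restarting at 0.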