Replies: 5 comments 1 reply
-
It is intended for PyTorch 0.4.1, where the call of
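For reference, a minimal sketch of the two calling conventions (a generic toy loop; the model, data, and hyper-parameters are placeholders, not code from this repo):

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.1)

for epoch in range(20):
    # PyTorch 0.4.x convention: scheduler.step(epoch) was commonly called at
    # the top of the epoch, before any optimizer.step().
    # scheduler.step(epoch)

    for _ in range(5):  # stand-in for iterating over a data loader
        optimizer.zero_grad()
        loss = model(torch.randn(4, 10)).sum()
        loss.backward()
        optimizer.step()

    # PyTorch >= 1.1 convention: call the scheduler once per epoch, after
    # optimizer.step(); calling it earlier can trigger a UserWarning.
    scheduler.step()
```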
-
Oh, I got it, thank you. But I run this framework in my PyTorch 1.2 environment and it doesn't give any warnings, so I am curious. Also, what is the explanation for question 2?
-
Please refer to the PyTorch documentation for detailed explanations:
-
That's why I asked it.
Thank you!
-
Oh, I also found a code logic bug:

```python
trainable_params = []
trainable_params += [{'params': filter(lambda x: x.requires_grad,
                                       model.backbone.parameters()),
                      'lr': cfg.BACKBONE.LAYERS_LR * cfg.TRAIN.BASE_LR}]
trainable_params += [{'params': model.rpn_head.parameters(),
                      'lr': cfg.TRAIN.BASE_LR}]
optimizer = torch.optim.SGD(trainable_params,
                            momentum=cfg.TRAIN.MOMENTUM,
                            weight_decay=cfg.TRAIN.WEIGHT_DECAY)
```

In the code above, after I analysed the
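This may or may not be the bug meant here (the analysis above is cut off), but one pitfall with this pattern is that the `filter(...)` over `requires_grad` is evaluated only once, when the optimizer is built, so parameters unfrozen later are not added to the optimizer unless it is rebuilt. A minimal sketch with generic module names (not this repo's code):

```python
import torch

backbone = torch.nn.Linear(8, 8)
head = torch.nn.Linear(8, 2)
for p in backbone.parameters():
    p.requires_grad = False  # backbone frozen for the first epochs

# The filter generator is consumed here, when SGD stores its param groups,
# so only parameters that are trainable *right now* end up in the optimizer.
optimizer = torch.optim.SGD(
    [{'params': filter(lambda x: x.requires_grad, backbone.parameters()),
      'lr': 0.0005},
     {'params': head.parameters(), 'lr': 0.005}],
    momentum=0.9, weight_decay=1e-4)

print(len(optimizer.param_groups[0]['params']))  # 0: frozen backbone params were filtered out

# Unfreezing later does not add them to the existing optimizer; the optimizer
# (and any scheduler wrapping it) has to be rebuilt for the backbone to train.
for p in backbone.parameters():
    p.requires_grad = True
print(len(optimizer.param_groups[0]['params']))  # still 0
```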
-
About lr_scheduler, I have two questions:

1. `lr_scheduler.step(epoch)` is written at line 194, before `optimizer.step()` at line 223. This seems to disobey the PyTorch rule of calling the scheduler after `optimizer.step()`, but it doesn't throw any error. Why?
2. When `cfg.BACKBONE.TRAIN_EPOCH == epoch`, I know the backbone should begin training, but I don't understand why `cfg.TRAIN.START_EPOCH` is now set as `last_epoch`. Won't this lead to a large lr decay when `epoch = cfg.BACKBONE.TRAIN_EPOCH + 1`?
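For reference on the second point, a generic illustration of what jumping a scheduler to a later epoch does (plain PyTorch `StepLR` with made-up numbers, not this repo's own scheduler classes): the learning rate is set straight to the value scheduled for that epoch, which can look like a sudden large decay.

```python
import torch
from torch.optim.lr_scheduler import StepLR

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = StepLR(optimizer, step_size=5, gamma=0.1)

print(optimizer.param_groups[0]['lr'])  # 0.1 at the start

# Jumping the schedule forward to epoch 11 (e.g. when a training stage is
# restarted part-way through) applies every decay up to that point at once:
# 0.1 * 0.1 ** (11 // 5) = 0.001. Recent PyTorch versions warn that passing
# an explicit epoch to step() is deprecated, but still honour it.
scheduler.step(11)
print(optimizer.param_groups[0]['lr'])  # ~0.001
```

So if a scheduler is rebuilt partway through training and then stepped with an epoch counter that already includes the earlier epochs, the lr it produces is the fully decayed value for that epoch rather than the base lr; whether that is the intended behaviour here is a question for the authors.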