Thank you for the excellent work!
I have a problem.
I fine-tuned the model on my own data. However, training got stuck at step 2, flow_predictions = model(left, right).
After one optimizer.step().clear_grad(), the network could no longer run inference on any image.
I used gdb to debug and found that it gets stuck at random layers during the network's forward pass.
I checked that my data is correct. Even with the same data, the model got stuck after one optimizer.step().clear_grad().
Do you have any suggestions?
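For context, the training step in question uses a fluent optimizer API, where step() returns the optimizer itself so clear_grad() can be chained. Below is a minimal pure-Python mock of that call pattern (these are illustrative stand-in classes, not the real MegEngine API):

```python
import numpy as np

class MockOptimizer:
    """Illustrative stand-in for an optimizer with a fluent API:
    step() returns self so clear_grad() can be chained."""

    def __init__(self, params, lr=0.1):
        self.params = params                          # list of numpy arrays
        self.grads = [np.zeros_like(p) for p in params]
        self.lr = lr

    def step(self):
        # Plain in-place SGD update.
        for p, g in zip(self.params, self.grads):
            p -= self.lr * g
        return self                                   # returning self enables chaining

    def clear_grad(self):
        # Zero the gradient buffers for the next iteration.
        for g in self.grads:
            g[:] = 0.0
        return self

params = [np.ones(3)]
opt = MockOptimizer(params)
opt.grads[0][:] = 1.0                                 # pretend backward() filled gradients
opt.step().clear_grad()                               # the chained call from the report
print(params[0])                                      # → [0.9 0.9 0.9]
print(opt.grads[0])                                   # → [0. 0. 0.]
```

The chained form is equivalent to calling step() and then clear_grad() on separate lines; the hang described above happens in the forward pass that follows this call, not inside the chain itself.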
I upgraded MegEngine from 1.9.1 to 1.11.1, and the model now trains without getting stuck.
However, it prints the following the first time optimizer.step().clear_grad() is called:
WRN Not FormattedTensorValue input for AttachGrad op: AttachGradValue{key=grad_1}, (49342:49342) Handle{ptr=0x5616b860dd58, name="update_block.encoder.conv.bias"}
The parameter updates are abnormal, and the results are worse.
Has anyone met the same problem, or does anyone have a suggestion?
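One way to investigate "abnormal" parameter updates is to snapshot a parameter before optimizer.step() and compare it afterward; if the tensor named in the warning (here update_block.encoder.conv.bias) never changes, its gradient may not be attached. A framework-agnostic numpy sketch of that check (the function name and threshold are illustrative, not MegEngine API):

```python
import numpy as np

def check_param_update(param_before, param_after, name, atol=1e-12):
    """Report whether a parameter actually changed after one optimizer step."""
    delta = np.abs(param_after - param_before)
    if delta.max() <= atol:
        return f"{name}: NOT updated (grad may not be attached)"
    return f"{name}: updated, max |delta| = {delta.max():.3g}"

# Simulated before/after snapshots of a bias parameter.
before = np.zeros(4)
after_stuck = np.zeros(4)                             # parameter never moved
after_ok = np.array([0.01, -0.02, 0.005, 0.0])

print(check_param_update(before, after_stuck, "update_block.encoder.conv.bias"))
print(check_param_update(before, after_ok, "update_block.encoder.conv.bias"))
```

In a real training script you would copy the parameter's value to numpy before the step and run this comparison right after, once per suspect parameter.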
Eatmelonboy changed the title from "finetune or train with own data, poor result" to "WRN Not FormattedTensorValue input for AttachGrad op: AttachGradValue{key=grad_1}" on Nov 26, 2022.