Hi, Yang:
I've read your LeBA work and I'm very interested in it. Recently I have been trying to reproduce the part that uses the forward loss and the backward loss to narrow the gap between the two models, but I'm confused by the code and hope you can help clarify it. Thank you very much.

I imitated the code in the repository and tried to calculate the backward loss as follows:
However, with the above implementation, the gap between the outputs of the surrogate model and the target model widens uncontrollably. I measured the gap via:

```python
# equivalent to the deprecated arguments reduce=True, size_average=False
torch.nn.MSELoss(reduction='sum')(target_model(images), surrogate_model(images))
```
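For reference, here is a self-contained toy sketch of a backward-style objective under which that MSE gap does shrink: fitting the surrogate's outputs directly to the target's query feedback. This is my own minimal construction for illustration (linear models, random data); it is not the LeBA code and not the snippet from this issue.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins: a trainable surrogate and a frozen "black-box" target.
surrogate = nn.Linear(3 * 32 * 32, 10)
target = nn.Linear(3 * 32 * 32, 10)
opt = torch.optim.SGD(surrogate.parameters(), lr=1e-3)

images = torch.randn(8, 3 * 32 * 32)

def output_gap():
    # Same metric as above: summed MSE between the two models' outputs.
    with torch.no_grad():
        return nn.functional.mse_loss(
            target(images), surrogate(images), reduction="sum"
        ).item()

gap_before = output_gap()
for _ in range(200):
    with torch.no_grad():
        target_scores = target(images)   # "query feedback" from the target
    loss = nn.functional.mse_loss(surrogate(images), target_scores)
    opt.zero_grad()
    loss.backward()
    opt.step()
gap_after = output_gap()                 # shrinks steadily in this toy setting
```

If a real backward loss diverges where this toy version converges, the usual suspects are the learning rate, the sign of the loss term, or the surrogate being updated on images that keep moving during the attack.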
What's more, I imitated the code in the repository and tried to calculate the forward loss as follows:
And when using the forward loss, the gap does not show a monotonic downward trend.
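For comparison, here is a hedged toy sketch of one forward-style objective: supervising the surrogate with the *change* in the target's scores along a perturbation, i.e. a finite-difference signal obtained from paired queries. Again, this is an illustrative assumption (linear models, a fixed perturbation, random data), not the LeBA implementation:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

surrogate = nn.Linear(3 * 32 * 32, 10)
target = nn.Linear(3 * 32 * 32, 10)   # frozen "black-box" target
opt = torch.optim.SGD(surrogate.parameters(), lr=0.1)

x = torch.randn(8, 3 * 32 * 32)
delta = 0.05 * torch.randn_like(x)    # a fixed perturbation per image

def change_gap():
    # How far apart the two models' score *changes* along delta are.
    with torch.no_grad():
        return nn.functional.mse_loss(
            surrogate(x + delta) - surrogate(x),
            target(x + delta) - target(x),
            reduction="sum",
        ).item()

gap_before = change_gap()
for _ in range(300):
    with torch.no_grad():
        # Two queries per image give the target's observed score change.
        target_change = target(x + delta) - target(x)
    surrogate_change = surrogate(x + delta) - surrogate(x)
    loss = nn.functional.mse_loss(surrogate_change, target_change)
    opt.zero_grad()
    loss.backward()
    opt.step()
gap_after = change_gap()
```

In this toy setting the change-gap shrinks monotonically, but note it only constrains the surrogate along `delta`; with deep models and perturbations that change every attack step, a non-monotonic trend in the raw output gap would not be surprising.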
I would like to ask which part of my understanding is wrong. :(
Yours,
Fei