Hi Mephisto,
I have a question I'd like to discuss with you.
As the title says, I don't know which approach is better under different settings.
If we simply added the weak images, selected by the confidence of the plain network, to the training set in every new cycle, maybe we could get results equal to or better than this paper's experimental results.
Have you tried this experimental setting?
Best regards,
William
Sorry for the confusion. The aim of "Learning Loss" is to use the output of the loss-prediction module to identify images whose features the current model handles poorly, and then to add those images to the labeled set in each cycle. So the principle may be the same as directly using the confidence behind the softmax. I don't know which is better, or how the results would differ from the experiments in this paper. Have you ever tried an experiment using the confidence behind the softmax?
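For reference, the softmax-confidence baseline I have in mind is plain least-confidence sampling: rank the unlabeled pool by the top softmax probability and query the lowest-confidence images first. A minimal sketch (the function name and the toy logits are just illustrations, not code from this repo):

```python
import numpy as np

def least_confidence_sample(logits, k):
    """Pick the k least-confident pool samples by top softmax probability.

    logits: (N, C) array of raw network outputs for the unlabeled pool.
    Returns the indices of the k samples whose top-class probability is lowest.
    """
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    confidence = probs.max(axis=1)       # top-class probability per sample
    return np.argsort(confidence)[:k]    # lowest-confidence first

# Toy pool: sample 0 is confident, sample 1 is near-uniform (uncertain),
# sample 2 is in between.
pool = np.array([[5.0, 0.0, 0.0],
                 [0.1, 0.0, 0.05],
                 [2.0, 0.0, 0.0]])
picked = least_confidence_sample(pool, k=1)
print(picked)  # the near-uniform sample (index 1) is queried first
```

In each active-learning cycle you would run this over the unlabeled pool, label the selected images, and retrain, instead of ranking by the predicted loss from the loss module.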
Aha, I understand now. Unfortunately, I have not tested that kind of approach. I'm not sure, but I think you can find some papers that use it, since it is straightforward.