Thanks for your solid work! I think there is a potential bug in the implementation of ConR. If an anchor has no positive samples in the batch, then denom (computed as follows) will be zero, which causes a division by zero and makes the loss NaN.
# Loss = sum over all samples in the batch (sum over (positive dot product/(negative dot product+positive dot product)))
denom = pos_i.sum(1)
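For anyone running into this in practice, here is a minimal sketch of one way to guard that denominator. This is not the repository's code; the tensor names `exp_sim`, `pos_i`, and `neg_i`, and the log-ratio form of the loss, are assumptions based on the snippet and comment above.

```python
import torch

def contrastive_loss_sketch(exp_sim, pos_i, neg_i, eps=1e-8):
    """Sketch (assumed, not ConR's exact loss) of a contrastive objective with a
    guard for anchors that have no positive pairs.

    exp_sim: (B, B) exponentiated pairwise similarities.
    pos_i, neg_i: (B, B) 0/1 masks of positive / negative pairs per anchor.
    """
    neg_sum = (exp_sim * neg_i).sum(1, keepdim=True)      # summed negative similarities per anchor
    log_ratio = torch.log(exp_sim + eps) - torch.log(exp_sim + neg_sum + eps)
    denom = pos_i.sum(1)                                   # number of positive pairs per anchor
    per_anchor = -(log_ratio * pos_i).sum(1) / denom.clamp(min=1)  # clamp avoids 0/0
    # Anchors with no positives contribute zero instead of NaN.
    per_anchor = torch.where(denom > 0, per_anchor, torch.zeros_like(per_anchor))
    return per_anchor.mean()
```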
@panmianzhi Hi, thank you very much for your interest in our work. In this implementation there will always be at least one positive pair: for each training sample we create two augmented samples, one of which is treated as a potential anchor and the other as its positive pair.
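As a quick toy check of that argument (a sketch, not the authors' code; it assumes the common construction of stacking the two augmented views along the batch dimension), the positive mask of every anchor then sums to at least one:

```python
import torch

def positive_mask_two_views(batch_size):
    """Build the same-sample positive mask when two augmented views per sample
    are concatenated along the batch dimension (assumed construction)."""
    n = 2 * batch_size
    labels = torch.arange(batch_size).repeat(2)         # sample identity per view
    mask = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-sample pairs are positive
    mask = mask & ~torch.eye(n, dtype=torch.bool)       # exclude self-pairs
    return mask.float()

print(positive_mask_two_views(4).sum(1).min())  # tensor(1.) -> every anchor has a positive
```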