
BUG in the code #2

Open
panmianzhi opened this issue Jun 8, 2024 · 1 comment


@panmianzhi

Thanks for your solid work! I think there is a potential bug in the implementation of ConR. For a given anchor, if there are no positive samples in the batch, then denom (computed as follows) will be zero, which causes a division by zero and makes the loss nan.

  # Loss = sum over all samples in the batch (sum over (positive dot product/(negative dot product+positive dot product)))
  denom = pos_i.sum(1)
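To make the failure mode concrete, here is a minimal numpy sketch (assumed toy data, not the ConR code itself): an anchor with an empty positive mask yields a zero denominator and a nan loss, while masking out such anchors before averaging keeps the loss finite.

```python
import numpy as np

# Hypothetical similarity scores for 3 anchors x 4 candidates.
# pos_mask marks positive pairs; the last anchor has no positives,
# mimicking the situation described in this issue.
sim = np.array([[2.0, 0.5, 1.0, 0.2],
                [0.1, 3.0, 0.4, 0.6],
                [0.7, 0.2, 0.9, 1.1]])
pos_mask = np.array([[1, 0, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 0]], dtype=float)

denom = pos_mask.sum(1)                       # [2., 1., 0.] -- zero for anchor 2
with np.errstate(invalid="ignore"):
    naive = (sim * pos_mask).sum(1) / denom   # anchor 2 becomes nan

# Guard: average only over anchors that actually have positives.
has_pos = denom > 0
safe = (sim * pos_mask).sum(1)[has_pos] / denom[has_pos]
loss = safe.mean()                            # finite even with empty anchors
print(np.isnan(naive).any(), np.isnan(loss))  # True False
```

An equivalent one-line guard is to clamp the denominator (e.g. `np.maximum(denom, 1.0)`), which leaves anchors without positives contributing zero instead of nan.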
@mahsakeramati
Collaborator

@panmianzhi Hi, thank you very much for your interest in our work. In this implementation there will always be at least one positive pair: for each training sample we create two augmented samples, one of which is treated as a potential anchor and the other as its positive pair.
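A small sketch of that argument (an assumed toy setup, not the authors' code): if views 2i and 2i+1 of each sample are always marked as each other's positives, then every anchor's positive count is at least one and denom can never be zero.

```python
import numpy as np

n = 4                                    # number of original samples
batch = 2 * n                            # two augmented views per sample
pos_mask = np.zeros((batch, batch))
for i in range(n):
    a, b = 2 * i, 2 * i + 1              # the two views of sample i
    pos_mask[a, b] = pos_mask[b, a] = 1  # each view is the other's positive

denom = pos_mask.sum(1)
print((denom >= 1).all())                # True: no anchor has an empty denominator
```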
