Reproduce Synthetic Experiment (paper sec. 4.1) #5
Comments
I have the same issue. I can't reproduce the synthetic experiments when I follow the steps and parameters mentioned in the paper. I don't know whether there are other hyperparameters that need to be set.
Hi, and thanks for your interest! Sorry, I missed this issue. I haven't gotten around to cleaning up and releasing the synthetic experiments yet. However, this is the code we used for applying Sinkhorn to the initial and refined assignment matrices:

```python
import torch
import torch.nn.functional as F

def sinkhorn(x, x_mask=None, num_steps=0):
    # Exclude masked entries from the softmax, then zero them out.
    if x_mask is not None:
        x = x.masked_fill(~x_mask, float('-inf'))
    x = torch.softmax(x, dim=-1)
    if x_mask is not None:
        x = x.masked_fill(~x_mask, 0)
    # Sinkhorn iterations: alternate column- and row-wise L1 normalization.
    for _ in range(num_steps):
        x = F.normalize(x, p=1, dim=-2)
        x = F.normalize(x, p=1, dim=-1)
        if x_mask is not None:
            x = x.masked_fill(~x_mask, 0)
    return x
```

I will let you know once the synthetic experiments are included in the repository.
Hi, I have a follow-up question: I saw that your …
We run Sinkhorn for a large, fixed number of iterations.
Can you give me an order of magnitude if you still have it? An exact number would be even better. Thanks!
We ran Sinkhorn for both 100 and 1,000 iterations, and both performed equally well.
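As a sanity check on the iteration counts discussed above, here is a minimal, self-contained sketch (not from the repository) that applies the `sinkhorn` helper shared earlier in this thread to a random score matrix and verifies that, after 100 iterations, the result is approximately doubly stochastic (rows and columns each sum to 1):

```python
import torch
import torch.nn.functional as F

def sinkhorn(x, x_mask=None, num_steps=0):
    # Softmax over the last dimension, with masked entries excluded.
    if x_mask is not None:
        x = x.masked_fill(~x_mask, float('-inf'))
    x = torch.softmax(x, dim=-1)
    if x_mask is not None:
        x = x.masked_fill(~x_mask, 0)
    # Alternate column- and row-wise L1 normalization.
    for _ in range(num_steps):
        x = F.normalize(x, p=1, dim=-2)
        x = F.normalize(x, p=1, dim=-1)
        if x_mask is not None:
            x = x.masked_fill(~x_mask, 0)
    return x

torch.manual_seed(0)
scores = torch.randn(8, 8)  # hypothetical unnormalized assignment scores
p = sinkhorn(scores, num_steps=100)

# Rows sum to 1 exactly (last step is a row normalization);
# columns converge to 1 as the iterations proceed.
print(torch.allclose(p.sum(dim=-1), torch.ones(8), atol=1e-3))
print(torch.allclose(p.sum(dim=-2), torch.ones(8), atol=1e-3))
```

Since the last operation in the loop is a row normalization, rows are exact and only the column sums carry residual error, which is why a tolerance is used on both checks.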
Hi,
Thanks for the amazing paper and code! This is not really an issue, but I wonder if the authors can share instructions or code on how to reproduce the synthetic experiments in section 4.1 (with Sinkhorn iterations). Any help or pointers are appreciated! Thanks!