Seeds for Reproducing Results #11
Comments
Hi @fzohra, Thank you for showing interest in our work! Regarding your queries, please note the following:
I hope this is helpful for you. Please let us know if you have any additional questions!
Thanks for the reply! I'm using the splits defined in datasets_splits/base2novel_splits/ssv2_splits. Using the provided pretrained weights (seed1, seed2, and seed3), I am able to reproduce the results by running inference on the three base and novel splits: on the base classes the average is Acc@1 16.148 / Acc@5 39.474, and on the novel classes it is Acc@1 12.337 / Acc@5 31.536. However, when I train the model myself, the results are lower. As the following table shows, the average accuracy on the base validation sets across the splits after 11 epochs of training is 14.356, below the 16.148 obtained with the pretrained weights. I no longer have the numbers from my earlier runs on A100s, but the gap was similar. I would really like my trained baseline to come close to the pretrained weights.
Hi,
I am trying to reproduce the reported results when training on 4 A100 GPUs. I am using the provided configurations for all evaluations (fully supervised, zero-shot, few-shot, and base-to-novel), but the accuracy tends to fall 1-2% short.
Could you please share the seeds used to obtain the reported results?
Also, what are the hardware specs used for the training runs reported in the paper?
Thank you!
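For anyone else chasing seed-related gaps like this: a common first step is to make sure every RNG in the stack is seeded before training. The helper below is a hedged sketch, not part of this repository; `set_seed` is a hypothetical name, and the cuDNN flags assume a PyTorch setup (guarded so the snippet still runs without PyTorch installed).

```python
import os
import random

import numpy as np


def set_seed(seed: int) -> None:
    """Seed Python, NumPy, and (if available) PyTorch RNGs.

    Hypothetical helper for illustration; not from this repository.
    """
    random.seed(seed)
    np.random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    try:
        import torch

        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Deterministic cuDNN kernels trade some speed for reproducibility.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False
    except ImportError:
        pass  # PyTorch not installed; stdlib and NumPy RNGs are still seeded


# Same seed should yield identical draws across runs.
set_seed(1)
a = np.random.rand(3).tolist()
set_seed(1)
b = np.random.rand(3).tolist()
```

Note that even with identical seeds, results can still drift across GPU counts and hardware generations, since data-loader sharding, effective batch size, and non-deterministic CUDA kernels all change with the setup.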