About the value of loss_w #7

Open
azuma164 opened this issue Jun 17, 2023 · 2 comments
@azuma164

Dear author,
I found your work very interesting, and I have a question about the training settings needed to reproduce the results in your paper.
Could you share the value of loss_w used for each pp setting?
I really appreciate any help you can provide.

@steb6

steb6 commented Jan 17, 2024

Hi, did you solve this issue? I just tried loss_w=0.03 with pp=0.5 on VOC2007 and got about 93.2 (close to the reported 93.6).

@jianbohuang

Hi, I ran this code to reproduce the zero-shot result on COCO.
The F1-score of 46.35 at top-3 is significantly lower than the 50.3 reported in your paper.

train_zsl.py --config_file configs/models/rn50_ep50.yaml --datadir /PATH_COCO/ --dataset_config_file configs/datasets/coco.yaml --input_size 224 --lr 0.002 --loss_w 0.01 --n_ctx_pos 64 --n_ctx_neg 64

The best result on the unseen (ZSL) classes was:

Test: [5/50] p_unseen 32.50 r_unseen 80.76 f_unseen 46.35 mAP_unseen 64.03

Could you share the full set of arguments used to produce the results in Table 4?
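
As a quick sanity check on the logged metrics (assuming f_unseen is the standard harmonic-mean F1 of p_unseen and r_unseen), the F1 value can be recomputed from the precision and recall in the test line above:

# Recompute top-3 F1 from the logged precision/recall (values taken from the test line above).
p_unseen = 32.50
r_unseen = 80.76
f_unseen = 2 * p_unseen * r_unseen / (p_unseen + r_unseen)
print(f"F1 = {f_unseen:.2f}")  # ~46.35, matching the log and below the paper's 50.3

This matches the reported 46.35, so the shortfall relative to the paper appears to come from the precision/recall values themselves rather than from how F1 is computed.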
