Are the results' variance large? #13

Open · yxchng opened this issue Nov 9, 2023 · 1 comment

yxchng commented Nov 9, 2023

| Model | A-150 PQ | A-150 mAP | A-150 mIoU | A-847 mIoU | PC-59 mIoU | PC-459 mIoU | PAS-21 mIoU | PAS-20 mIoU | COCO PQ | COCO mAP | COCO mIoU |
|---|---|---|---|---|---|---|---|---|---|---|---|
| fc-clip large | 26.8 | 16.8 | 34.1 | 14.8 | 58.4 | 18.2 | 81.8 | 95.4 | 54.4 | 44.6 | 63.7 |
| fc-clip large (reproduce) | 25.3 | 16.2 | 32.8 | 14.2 | 56.7 | 17.5 | 82.7 | 95.5 | 56.5 | 47.6 | 65.0 |

I tried retraining the ConvNeXt-Large model, and the performance is quite a bit lower than the published results. Is the variance across runs large, such that I would have to rerun training a few times?
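For context, a minimal sketch of how one might summarize run-to-run variance over a few retraining runs; the numbers below are placeholders, not actual results:

```python
# Sketch only: summarize the spread of a metric (e.g. ADE20K PQ) across reruns.
from statistics import mean, stdev

def summarize_runs(metric_per_run):
    """Return (mean, std) for a list of per-run metric values."""
    m = mean(metric_per_run)
    s = stdev(metric_per_run) if len(metric_per_run) > 1 else 0.0
    return m, s

# Hypothetical example with placeholder values from three reruns.
pq_runs = [25.3, 25.9, 26.1]
m, s = summarize_runs(pq_runs)
print(f"ADE20K PQ over {len(pq_runs)} runs: {m:.1f} ± {s:.1f}")
```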

@cornettoyu (Collaborator)

Hi,

You can refer to the training log here.

We kept the best checkpoint in terms of the ADE20K PQ metric, since we also note that the last checkpoint usually tends to "overfit" to the COCO dataset and generalizes worse to the other datasets.
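For reference, a minimal sketch of that kind of checkpoint selection, assuming a hypothetical `evaluate_ade20k_pq` helper (not part of the fc-clip codebase) that runs zero-shot ADE20K evaluation on a checkpoint and returns its PQ:

```python
# Sketch: keep the checkpoint with the best ADE20K PQ instead of the last one.
import glob
import shutil

def select_best_checkpoint(ckpt_dir, evaluate_ade20k_pq, out_path="model_best.pth"):
    best_pq, best_ckpt = float("-inf"), None
    for ckpt in sorted(glob.glob(f"{ckpt_dir}/model_*.pth")):
        pq = evaluate_ade20k_pq(ckpt)        # zero-shot ADE20K panoptic eval (assumed helper)
        if pq > best_pq:
            best_pq, best_ckpt = pq, ckpt
    if best_ckpt is not None:
        shutil.copyfile(best_ckpt, out_path)  # keep the best, not the last
    return best_ckpt, best_pq
```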
