Problem of evaluating ReCo on CityScapes #6
Hi @xs1317, and thank you for your kind words and interest in our work. I'm having difficulty understanding your question, as it seems you got the same results as the ones reported in the paper after rounding, i.e., 74.6 and 19.3 for pixel accuracy and mIoU (in percent). Could you please double-check your question, or whether by any chance you got the numbers confused with another dataset such as KITTI-STEP? Kind regards,
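For reference, the two metrics under discussion can be sketched as below; the toy confusion matrix is purely illustrative (not Cityscapes data), and the final lines simply confirm that the raw numbers in this thread round to the values reported in the paper.

```python
import numpy as np

def pixel_acc_and_miou(conf):
    """conf[i, j] = number of pixels with ground-truth class i predicted as class j.
    Pixel accuracy = trace / total; per-class IoU = TP / (TP + FP + FN)."""
    tp = np.diag(conf).astype(float)
    acc = tp.sum() / conf.sum()
    iou = tp / (conf.sum(axis=0) + conf.sum(axis=1) - tp)
    return acc, float(iou.mean())

# Toy 2-class example, rows = ground truth, columns = prediction.
conf = np.array([[8, 2],
                 [1, 9]])
acc, miou = pixel_acc_and_miou(conf)
print(acc)  # 0.85

# The raw numbers from this thread round to the reported 74.6 / 19.3 (percent).
print(round(0.7456942998704943 * 100, 1))   # 74.6
print(round(0.19347486644280978 * 100, 1))  # 19.3
```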
Sorry, I was too quick to reply to you earlier - I noticed that the numbers you compared against are the ones in the revised version rather than the arXiv version. The difference is: in the new version, ReCo was evaluated at the original resolution using the inference code of Drive&Segment (https://github.com/vobecant/DriveAndSegment) for a fair comparison, whereas ReCo+ is evaluated at 320×320 pixels as in STEGO (https://arxiv.org/abs/2203.08414). As the image pre-processing used in this repo is for the latter case, there are some differences, as you noted. Noel
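As a rough illustration of the second setting (evaluating at 320×320), here is a minimal nearest-neighbour resize of a label map; the helper and the toy label map are my own sketch, not code from this repo or from STEGO/Drive&Segment.

```python
import numpy as np

def resize_nearest(labels, out_h, out_w):
    """Nearest-neighbour resize of a 2-D label map (class ids must not be interpolated)."""
    h, w = labels.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return labels[rows[:, None], cols]

# A toy map at the Cityscapes frame size (1024x2048) with 19 classes.
labels = np.arange(1024 * 2048, dtype=np.int64).reshape(1024, 2048) % 19

# STEGO-style evaluation (the setting used for ReCo+) works at 320x320.
small = resize_nearest(labels, 320, 320)
print(small.shape)  # (320, 320)
```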
Hi @noelshin, thanks for your reply.
Thank you very much for letting me know - I could confirm that the embedding file produces the numbers you got. To double-check the code, I re-implemented the computation of the image embeddings for KITTI-STEP and, in that case, obtained the correct numbers as reported in our paper. (I think I mistakenly uploaded a different embedding file from some other setting I tried during ablation studies.) Please download the embedding file again and verify whether you get the exact numbers. If not, please let me know. :)
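To verify the re-downloaded file actually changed, comparing digests is a quick check. This is a generic sketch: the demo hashes a throwaway temp file, and in practice you would point it at your own embedding-file path before and after re-downloading.

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk=1 << 20):
    """Stream a file through SHA-256 so large embedding files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Demo on a throwaway file; with a real download, hash the old and new
# embedding files and compare the two digests.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"embedding bytes")
    path = f.name
print(sha256_of(path) == hashlib.sha256(b"embedding bytes").hexdigest())  # True
os.remove(path)
```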
Hi @noelshin, thanks for your help.
Hello, thanks for sharing your fantastic work.
I evaluated ReCo on the Cityscapes validation split but didn't get the same result as in your paper. Here are my results and configs:
Results: deit_s_16_sin_in_train_ce_ta_dc/results_crf.json
"Pixel Acc": 0.7456942998704943, "Mean IoU": 0.19347486644280978
I got a better pixel accuracy but a worse mIoU.
configs:
I directly downloaded the reference image embeddings for Cityscapes and the pre-trained models.
The problem should not be caused by image preprocessing, because I got the same result when evaluating ReCo+ on Cityscapes.
Would you help me solve this problem? Thanks!