
The error samples are due to issues with the ground truth annotations rather than errors in the model predictions. #24

Open
libenchong opened this issue Jul 3, 2023 · 6 comments

Comments

@libenchong

Hello @amaralibey, I have selected the failure samples from the MSLS validation set, the pitts30k test set, and the pitts250k test set where recall@1 of the MixVPR model failed. For a large portion of these failure samples, the top-1 images retrieved by the model are actually correct, but they are not listed in the ground truth. In other words, these failure cases are caused by problems in the ground truth rather than by errors in the model's predictions, which comparing image similarity cannot solve.
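For reference, the selection described above can be sketched as follows. This is a minimal, hypothetical example (function and variable names are mine, and it assumes precomputed L2-normalized descriptors), not the exact script used:

```python
import numpy as np

def recall1_failures(query_desc, db_desc, ground_truth):
    """Return indices of queries whose top-1 retrieval is NOT in the GT list.

    query_desc:   (Q, D) L2-normalized query descriptors
    db_desc:      (N, D) L2-normalized database descriptors
    ground_truth: list where ground_truth[i] holds the GT database indices for query i
    """
    sims = query_desc @ db_desc.T          # cosine similarities, shape (Q, N)
    top1 = sims.argmax(axis=1)             # best-matching database image per query
    return [i for i, p in enumerate(top1) if p not in ground_truth[i]]

# Toy example with 2-D descriptors
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071]])
queries = np.array([[0.9, 0.1], [0.1, 0.9]])
queries /= np.linalg.norm(queries, axis=1, keepdims=True)
gt = [np.array([0]), np.array([2])]        # query 1's GT omits its true match (db #1)
print(recall1_failures(queries, db, gt))   # → [1]
```

Each index returned this way is a recall@1 "failure" that still needs visual inspection, since (as discussed here) the top-1 image may be correct but simply missing from the GT list.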

@amaralibey
Owner

Hello @libenchong,

Thank you again for your valuable feedback. Yes, I've noticed some errors in that regard but never did a thorough investigation like you did. That would make a great study case I think.

Best regards,

@libenchong
Author

Hello @amaralibey. In these error cases, the image the model predicts as most similar to the query is either absent from the ground truth, or its geographical distance to the query is slightly larger than the ground-truth threshold. I'm not sure how to handle these cases; do you have any suggestions?
Here is an error sample; I think the image predicted by the model is more accurate than the ground truth:
[image: error sample]

@amaralibey
Owner

Hello @libenchong,

Great observation!

In most benchmarks, the ground truth (GT) typically includes images located within a 25-meter perimeter of the query, based on GPS coordinates. However, since Mapillary images are primarily crowdsourced from various sources such as phones and dashboard cameras, mapillary_sls often exhibits a considerable amount of GPS noise, as demonstrated in the example you provided.

Although the model's prediction appears to be close to the query (judging by the door on the right), the noisy coordinates associated with the image indicate a distance greater than 25 meters, resulting in an inaccurate label.

I would propose manual verification when creating a benchmark, specifically for positive matches within the 25-40 meter range from the query, particularly when we rely solely on GPS coordinates that are prone to a lot of noise. This way, we can ensure a more precise evaluation of the model's performance.
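The distance-band idea above can be sketched like this. It is a hypothetical illustration (the function names and thresholds are mine; the 25 m / 40 m values come from this thread) using the standard haversine formula on GPS coordinates:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS points."""
    R = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def classify_match(query_gps, pred_gps, hard=25.0, soft=40.0):
    """Label a prediction by distance: clear positive, manual-check band, or negative."""
    d = haversine_m(*query_gps, *pred_gps)
    if d <= hard:
        return "positive"
    return "verify manually" if d <= soft else "negative"

# A predicted image ~30 m from the query lands in the ambiguous band
print(classify_match((48.85660, 2.3522), (48.85687, 2.3522)))
```

Predictions falling in the "verify manually" band are exactly the cases where noisy GPS can flip a correct retrieval into a counted failure, so a human pass over that band would clean up the GT without touching clear positives or negatives.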

Additionally, it's worth noting that these errors in the labels apply to all evaluated techniques and might affect their performance in a similar manner. This helps maintain fairness in the evaluation process.

Again, thank you for your valuable observations.

@huachaoguang


Hello @libenchong, I'm retrying this project, but I can't find the GT for pitts30k. Could you give me a link to download the pitts30k GT? Thank you very much!
Have a good day.
xinrui xie
[two screenshots attached]

@libenchong
Author

Hello @huachaoguang, you can refer to another project by the MixVPR author called gsv-cities, where you can find the GT files you need in the datasets folder. Additionally, several better-performing VPR methods have been published at this year's CVPR, including another paper by the MixVPR author titled Bag-of-Queries.

@huachaoguang


Hello @libenchong, thank you for your reply. Yesterday I found a way to resolve the error. I also want to try the newer methods like Bag-of-Queries and AnyLoc.
Have a good day!
huachaoguang
