
How does DELF / DELG feature size & number impact recognition on Google Landmarks? #10668

Open
MSoumm opened this issue Jun 11, 2022 · 1 comment
Labels: models:research (models that come under research directory), type:support

Comments

MSoumm commented Jun 11, 2022

Hi,

I am using DELG pre-trained on Google Landmarks v2-clean to perform landmark recognition on a large in-house dataset containing millions of images.
I adapted the extract_features.py script to generate features for the GLv2-clean dataset (rather than the Oxford/Paris datasets it is written for), but I am concerned about the size of the local features for each image (around 500 KB per sample): the features for the whole GLv2 dataset would take up to 1 TB of space, which is more than the original dataset itself.
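For context, here is the back-of-the-envelope arithmetic behind those numbers (a rough sketch only; the 1,000 features per image, 128-D float32 descriptors, and image count are my assumptions based on the DELG defaults, not measured values):

```python
# Rough storage estimate for DELG local features (assumed defaults, not measured).
num_images = 1_500_000       # placeholder for the size of the dataset being indexed
features_per_image = 1000    # assumed default max_feature_num
descriptor_dim = 128         # assumed DELG local descriptor dimensionality
bytes_per_value = 4          # float32

# Descriptor payload only; keypoint locations, scales and scores add a bit more.
per_image_bytes = features_per_image * descriptor_dim * bytes_per_value
total_bytes = num_images * per_image_bytes

print(f"~{per_image_bytes / 1024:.0f} KB per image")               # ~500 KB
print(f"~{total_bytes / 1024**4:.2f} TB for {num_images:,} images")
```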

So far, I have left the default configuration unchanged.
Has anyone experimented with the use_pca parameter in the config file, and how does it affect recognition/retrieval performance? I could presumably also tune the score threshold or the maximum number of features per image, but I am afraid of significantly lowering performance.
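To make the question concrete, this is roughly how I imagine trimming the extraction config would look (a sketch only; the proto field names and file names are my assumptions from reading delf_config.proto and the DELG example configs, and I have not measured their effect on accuracy):

```python
# Sketch: load a DELG extraction config, cap the number of local features,
# raise the attention-score threshold, and enable PCA to shrink descriptors.
from google.protobuf import text_format
from delf import delf_config_pb2

config = delf_config_pb2.DelfConfig()
with open('r50delg_gld_config.pbtxt') as f:   # assumed config file name
  text_format.Merge(f.read(), config)

config.delf_local_config.max_feature_num = 500    # fewer keypoints -> roughly linear storage cut
config.delf_local_config.score_threshold = 175.0  # drop weaker keypoints
config.delf_local_config.use_pca = True           # project descriptors to a lower dimension
config.delf_local_config.pca_parameters.pca_dim = 40

with open('r50delg_gld_config_small.pbtxt', 'w') as f:
  f.write(text_format.MessageToString(config))
```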

Thank you very much in advance

@frederick0329 (Contributor) commented
Can you share which repo or code you are using so we can try our best to route it to the right person?

@jvishnuvardhan added the models:research (models that come under research directory) label on Jul 19, 2022