I am using DELG pre-trained on Google Landmarks v2-clean to perform landmark recognition on a large homemade dataset containing millions of images.
I adapted the extract_features.py script to generate features from the GLv2-clean dataset (instead of the provided Oxford/Paris code), but I am a bit concerned about the local feature size: at around 500 KB per image, the features for the whole GLv2 dataset would take up to 1 TB, which is more than the original dataset itself.
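For context, a back-of-envelope storage estimate. The image count here is an assumption for illustration (GLv2-clean is on the order of a couple of million images), not a figure from the dataset documentation:

```python
# Rough storage estimate for DELG local features.
# Assumed: ~2 million images, ~500 KB of local features per image.
KB = 1024
num_images = 2_000_000           # assumed dataset size
bytes_per_image = 500 * KB       # observed ~500 KB per sample

total_bytes = num_images * bytes_per_image
total_tb = total_bytes / 1024**4
print(f"~{total_tb:.2f} TB")     # ~0.93 TB, i.e. close to 1 TB
```

Halving either the per-image feature count or the descriptor dimensionality scales this total roughly linearly, which is why the config knobs below matter at this dataset size.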
So far, I have left the default configuration provided unchanged.
Has anyone experimented with the use_pca parameter in the config file, and how does it affect recognition/retrieval performance? I guess one could also tune the score threshold or the maximum number of features per image, but again, I am afraid of significantly lowering performance.
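For concreteness, this is the kind of fragment I mean in the extraction config. The field names follow the DelfConfig text proto in the DELF repository (double-check them against your version of delf_config.proto), and the PCA paths and values are placeholders, not known-good settings:

```proto
delf_local_config {
  use_pca: true
  max_feature_num: 500      # fewer keypoints per image -> smaller files
  score_threshold: 175.0    # raise to keep only stronger features
  pca_parameters {
    mean_path: "/path/to/mean.datum"                 # placeholder
    projection_matrix_path: "/path/to/pca_proj.datum"  # placeholder
    pca_dim: 40             # lower descriptor dim -> smaller files
    use_whitening: false
  }
}
```

My question is essentially how far these three knobs (use_pca/pca_dim, max_feature_num, score_threshold) can be pushed before recognition quality degrades noticeably.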
Thank you very much in advance