Descriptor loss train #287
Hi, are you using exactly the code from this repo, or did you plug parts of this repo into your own code? Are you using the same parameters as in the master branch?
Thanks for your prompt reply.
I see, then it might be a bit tricky for me to help you, as it is different code... It could be an implementation bug, or simply that your reimplementation needs different parameter tuning than this repo does. Note that there is also a PyTorch reimplementation of SuperPoint (partially based on this repo) that you might want to check out: https://github.com/eric-yyjau/pytorch-superpoint
Thanks for your advice.
Tuning the descriptor loss was quite tricky in my case, and training with a triplet loss is rather tricky in general. One thing that usually helps, in my experience with triplet losses, is to pre-train the network with a "relaxed" definition of the negative samples. In SuperPoint, given one cell at position (h, w), the corresponding cell (h', w') in the other image is used as the positive anchor, while all the other cells are treated as negatives. But the descriptor of, say, pixel (h'+1, w') is also very close to the one of (h, w), so forcing these two descriptors to be far apart confuses the network (at least at the beginning of training). So what you could do is ignore the neighboring cells of each positive cell in the descriptor loss, which is equivalent to making the negative samples less hard. Once training has converged with this easier loss, you can fine-tune with the actual SuperPoint loss to get the best performance.
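As a sketch of the "relaxed negatives" idea above: one way to implement it is to build three masks over the cell grid, where exact correspondences are positives, cells within a small radius of the positive are ignored, and only the remaining cells count as negatives. Everything here (function name, the Chebyshev-distance choice, the `radius` parameter) is illustrative, not code from the repo.

```python
import numpy as np

def relaxed_correspondence_mask(h, w, radius=1):
    """Masks for a relaxed descriptor loss over an (h, w) cell grid.

    Returns boolean (h*w, h*w) arrays: positive pairs, ignored pairs
    (neighbors of the positive, excluded from the negatives), and the
    remaining negative pairs. Illustrative sketch, not the repo's code.
    """
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                      axis=-1).reshape(-1, 2)          # (h*w, 2) cell coords
    # Chebyshev distance between every pair of cells.
    dist = np.abs(coords[:, None, :] - coords[None, :, :]).max(axis=-1)
    positive = dist == 0                               # exact correspondence
    ignored = (dist > 0) & (dist <= radius)            # too close: skip them
    negative = dist > radius                           # safely far: negatives
    return positive, ignored, negative
```

Pre-training would use only the `negative` entries when summing the negative term of the loss; fine-tuning with the original SuperPoint loss then amounts to setting `radius=0`.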
Thanks for sharing your experience.
For SuperPoint, unfortunately not; I did not need to use this trick when I trained it. But I had to use it for other works requiring a triplet loss, one example being here: https://github.com/mihaidusmanu/d2-net/blob/master/lib/loss.py. That loss is a bit different from the SuperPoint one, though: it is a triplet loss with hardest-negative mining. But maybe you can get the idea and apply it to the SuperPoint loss.
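The hardest-negative mining mentioned above can be sketched as follows: for each positive pair, the negative is the closest non-matching descriptor in either image, and a margin loss pushes it away. This is a simplified, hedged illustration of the technique, not the actual D2-Net loss (which additionally weights terms by detection scores); all names are illustrative.

```python
import numpy as np

def hardest_triplet_loss(desc_a, desc_b, margin=1.0):
    """Triplet margin loss with hardest-negative mining (sketch).

    desc_a, desc_b: (n, d) L2-normalized descriptors; row i of desc_a
    matches row i of desc_b. Illustrative only.
    """
    # Pairwise distances between all descriptors across the two images.
    dist = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=-1)
    pos = np.diag(dist)                       # distance of each positive pair
    # Mask the positives on the diagonal before mining negatives.
    off = dist + np.eye(len(dist)) * 1e6
    # Hardest negative: closest wrong match in either direction.
    hardest_neg = np.minimum(off.min(axis=1), off.min(axis=0))
    return np.maximum(0.0, margin + pos - hardest_neg).mean()
```

With perfectly matching descriptors the positive distances are zero and the loss vanishes once the hardest negatives are farther than the margin.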
Thanks very much for your help. (Referenced code: SuperPoint/superpoint/models/utils.py, line 110, at commit 1742343.)
The training and validation descriptor losses with a small amount of data (100 samples) are shown below. These three lines of code normalize the descriptor dot product; is this normalization necessary after the dot product of the descriptors?
I am not sure I fully understand your last question, but these lines with the l2 normalization are a trick to make the correspondences between points more discriminative (i.e. so that there is at most one strong correspondence rather than several similar candidates). The original SuperPoint did not have this trick, and the code should also work if you comment it out, but I personally observed empirically better results with it. On the graphs you show, there is clear overfitting to the small training set, due to the small number of samples.
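The effect of that normalization trick can be sketched like this: l2-normalizing the correspondence volume along each cell dimension rescales every cell's scores relative to its competitors, so a single strong match stands out. This is a hedged illustration of the idea being discussed, not the repo's exact code.

```python
import numpy as np

def normalize_dot_product(dot, eps=1e-8):
    """L2-normalize a descriptor dot-product volume along both dimensions.

    dot: (n, m) raw dot products between the n cells of one image and the
    m cells of the other. Sketch of the discriminativeness trick, not the
    actual lines from superpoint/models/utils.py.
    """
    # Normalize per row (cells of image 1), then per column (cells of
    # image 2), so each score is relative to its competing candidates.
    dot = dot / (np.linalg.norm(dot, axis=1, keepdims=True) + eps)
    dot = dot / (np.linalg.norm(dot, axis=0, keepdims=True) + eps)
    return dot
```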
Thanks very much for your help. |
Hi, first thank you for this great work which really helped me a lot!
I want to train the SuperPoint model on my own data. The detection loss part looks normal, but the descriptor loss oscillates and does not converge.
The inputs are a 256×256 image and its homography-warped counterpart, and the model and loss function are the same as in your repo.
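For reference, the ground-truth correspondences between such a pair come from mapping cell coordinates through the homography; a minimal sketch of that mapping (function name illustrative) is:

```python
import numpy as np

def apply_homography(H, pts):
    """Map (n, 2) pixel coordinates through a 3x3 homography H.

    Sketch of how correspondences between an image and its warped
    counterpart can be computed; not code from the repo.
    """
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)  # homogeneous
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]  # back to pixel coordinates
```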
The detector and descriptor losses over 300 epochs are shown below.
[plot: Detector loss]
[plot: Descriptor loss]
Can you give me some advice?