Cannot replicate the result after pretraining on COCO #19
Comments
Hi @Skaldak, thanks for asking! I'm actually curious about the performance gap you see with your retraining. Could you please check this line? Also, have you compiled all the lib files correctly in the suggested environment and completed the whole 20-epoch training?
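As a quick sanity check (a minimal sketch, not from this repo), you can confirm that the PyTorch and CUDA versions seen at runtime match the ones the lib files were compiled against:

```python
# Minimal environment check: the runtime versions should match the ones used
# to build the compiled extensions (e.g. PyTorch 0.4.x / CUDA 8.0).
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA (toolkit used to build torch):", torch.version.cuda)
```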
Thanks for your explanation.
The results in the paper are obtained with

Training directly with PyTorch 1.0 on lib files compiled with PyTorch 0.4 could potentially bring some unpredictable issues, in my opinion; I'm not sure you could get the same results even if you can train with the compiled files (see some discussion here).

Another thing to check is the backbone weight initialization. In my case, I simply initialized it with the ImageNet-pretrained ResNet-101 weights, even though the actual backbone is ResNet-50 for COCO.

For reproducibility concerns, we've recently retrained the base-class training using ResNet-101 as the backbone with
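To illustrate what I mean by the backbone initialization (a minimal sketch using torchvision models, not this repo's actual loading code): loading ImageNet ResNet-101 weights into a ResNet-50 backbone simply keeps the tensors whose names and shapes match.

```python
import torchvision

# Hypothetical illustration, not this repo's code: initialize a ResNet-50
# backbone from an ImageNet-pretrained ResNet-101 checkpoint.
backbone = torchvision.models.resnet50(pretrained=False)
pretrained = torchvision.models.resnet101(pretrained=True).state_dict()

model_state = backbone.state_dict()
# Keep only the parameters/buffers present in ResNet-50 with identical shapes,
# so the extra ResNet-101 blocks are silently skipped.
matched = {k: v for k, v in pretrained.items()
           if k in model_state and v.shape == model_state[k].shape}
model_state.update(matched)
backbone.load_state_dict(model_state)
print(f"initialized {len(matched)}/{len(model_state)} tensors from ResNet-101 weights")
```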
Has anyone replicated the result on Pascal VOC with PyTorch >= 1.0? I only get mAP = 10.8.
May I use your code for reference? My device doesn't support CUDA 8.0. The result I obtained by reproducing with the author's code is very poor; the 10-shot accuracy is only 21.35%.
Thanks for your outstanding work and remarkable results!
However, after several attempts with different random seeds, we cannot replicate the results (10-shot mAP 12.5 and 30-shot mAP 14.7) after pretraining on COCO ourselves. To our surprise, the results seem fine when we use your pretrained COCO model.
Also, we find that you only use 20,000+ base-class-only images in the pretraining phase, rather than images with both base and novel ground truths. Is this to avoid the RPN learning novel classes as background? Could you explain this?
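For concreteness, the kind of base-only filtering we have in mind looks roughly like the sketch below; the annotation path and BASE_CLASS_IDS are placeholders, not taken from your code.

```python
from pycocotools.coco import COCO

# Placeholder path and category ids, not from the repo.
coco = COCO("annotations/instances_train2014.json")
BASE_CLASS_IDS = set(range(1, 61))  # hypothetical ids of the 60 base categories

base_only_img_ids = []
for img_id in coco.getImgIds():
    anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
    cat_ids = {a["category_id"] for a in anns}
    # Keep the image only if it is annotated and contains no novel-class objects.
    if cat_ids and cat_ids <= BASE_CLASS_IDS:
        base_only_img_ids.append(img_id)

print(f"{len(base_only_img_ids)} images contain base-class objects only")
```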