Result reproduction problem #13
Hi! Unfortunately, I didn't keep those logs. I can only provide a recent log from training 'hrmapnet_mapqr_nusc_r50_24ep.py'. Besides, I have also noticed randomness in the reproduced results; I commonly get around 66.0~67.0. I am further checking the code and trying to refine the training strategy.
I have encountered the same problem as you. Have you solved it yet?
Same problem: I trained hrmapnet_mapqr_nusc_r50_24ep_new and only got 66.68, compared to the 72.6 reported by the author.
Dear author,
I mean the MapTRv2-based version gets around 66~67. The MapQR-based version gets ~72.6, as in the provided log. It is very strange to get just 66.68 for the MapQR-based version; maybe you can also upload your log.
I haven't found any problems in the released code. Under the current training strategy, the global map is regenerated for each epoch; within each epoch, the model is trained with empty maps in the early stage and with accumulated maps in the late stage. This seems suboptimal and may introduce extra randomness. I am therefore trying to change the training strategy to use some pre-loaded maps. I don't have enough GPUs at the moment; if the new strategy tests well, I will update the code later.
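The schedule described above (reset the global map every epoch, serve empty maps during an early warm-up stage, then serve the accumulated map) can be sketched roughly as follows. This is a minimal illustration with hypothetical names and a made-up warm-up fraction, not the released implementation:

```python
import numpy as np

class GlobalMapSchedule:
    """Hypothetical sketch of the per-epoch global-map schedule described above."""

    def __init__(self, map_shape=(200, 100), warmup_frac=0.3):
        self.map_shape = map_shape      # rasterized BEV map size (illustrative)
        self.warmup_frac = warmup_frac  # fraction of each epoch with empty maps
        self.reset()

    def reset(self):
        """Called at the start of every epoch: wipe the global map."""
        self.global_map = np.zeros(self.map_shape, dtype=np.float32)

    def query(self, iter_idx, iters_per_epoch):
        """Empty map during warm-up, accumulated map afterwards."""
        if iter_idx < self.warmup_frac * iters_per_epoch:
            return np.zeros(self.map_shape, dtype=np.float32)
        return self.global_map

    def update(self, local_pred):
        """Fuse the current frame's prediction into the global map."""
        self.global_map = np.maximum(self.global_map, local_pred)
```

Because the map the model conditions on early in each epoch is always empty, while the late-stage map depends on that epoch's own (noisy) predictions, run-to-run variance in the accumulated map feeds back into training, which may be one source of the randomness noted above.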
Hi team,
I trained with the config hrmapnet_maptrv2_nusc_r50_24ep.py and got a validation mAP of 0.6242; when using a single GPU for validation, the value is 0.652. I can't get the 67.2 in your report. Could you provide your training log so I can compare for differences?
Thanks
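On the multi-GPU vs. single-GPU validation gap: one common cause (an assumption here, not confirmed for this repo) is that a distributed sampler pads the dataset to a multiple of the world size, so duplicated samples leak into the gathered predictions and skew the metric. A minimal sketch of deduplicating gathered results by sample index before computing mAP:

```python
def dedup_results(gathered, dataset_len):
    """Drop padding duplicates from distributed evaluation.

    `gathered` is a list of (sample_idx, result) pairs collected from all
    ranks (names are illustrative); only the first result per valid index
    is kept, and results are returned in dataset order.
    """
    seen = {}
    for sample_idx, result in gathered:
        if sample_idx < dataset_len and sample_idx not in seen:
            seen[sample_idx] = result
    return [seen[i] for i in sorted(seen)]
```

If the evaluation code already deduplicates correctly, the remaining gap may instead come from non-determinism in training rather than in evaluation, so comparing both numbers against the single-GPU run, as done above, is a reasonable sanity check.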