
Result reproduce problem #13

Open
gold-mango opened this issue Oct 28, 2024 · 6 comments

@gold-mango

Hi team,

I trained with the config hrmapnet_maptrv2_nusc_r50_24ep.py and got a val mAP of 0.6242; when using a single GPU for validation, the value is 0.652. I can't reach the 67.2 you report. Could you provide your training log so I can compare for differences?

Thanks

@fishmarch
Contributor

Hi! Unfortunately, I didn't keep those logs. I can only provide a recent training log for 'hrmapnet_mapqr_nusc_r50_24ep.py'.
hrmapnet_mapqr.log

Besides, I have also noticed some randomness in the reproduced results; I commonly get around 66.0–67.0. I am checking the code further and trying to refine the training strategy.

@Baiwenjing

> Hi team,
>
> I trained with the config hrmapnet_maptrv2_nusc_r50_24ep.py and got a val mAP of 0.6242; when using a single GPU for validation, the value is 0.652. I can't reach the 67.2 you report. Could you provide your training log so I can compare for differences?
>
> Thanks

I have encountered the same problem as you. Have you solved it yet?

@JunrQ

JunrQ commented Nov 18, 2024

Same problem. I trained with hrmapnet_mapqr_nusc_r50_24ep_new and only got 66.68, compared with the 72.6 reported by the authors.

@JunrQ

JunrQ commented Nov 18, 2024

> Hi! Unfortunately, I didn't keep those logs. I can only provide a recent training log for 'hrmapnet_mapqr_nusc_r50_24ep.py'. hrmapnet_mapqr.log
>
> Besides, I have also noticed some randomness in the reproduced results; I commonly get around 66.0–67.0. I am checking the code further and trying to refine the training strategy.

Dear author,
I noticed that the log shows a result of 0.7079, but you mentioned that the result is around 66.0–67.0. Am I using the wrong metric?

@fishmarch
Contributor

> > Hi! Unfortunately, I didn't keep those logs. I can only provide a recent training log for 'hrmapnet_mapqr_nusc_r50_24ep.py'. hrmapnet_mapqr.log
> > Besides, I have also noticed some randomness in the reproduced results; I commonly get around 66.0–67.0. I am checking the code further and trying to refine the training strategy.
>
> Dear author, I noticed that the log shows a result of 0.7079, but why do you say the result is around 66.0–67.0? Am I using the wrong metric?

I mean the MapTRv2-based version gets around 66–67. The MapQR-based version gets ~72.6, as in the provided log. It is very strange to get only 66.68 with the MapQR-based version; maybe you can also upload your log.

@fishmarch
Contributor

I haven't found any problems in the released code. With the current training strategy, the global map is generated anew for each epoch, so within each epoch the model is trained with empty maps in the early stage and with accumulated maps in the late stage. This seems suboptimal and may cause more randomness. Thus I'm trying to change the training strategy to start from some loaded maps. I don't have enough GPUs at the moment; if the new strategy tests well, I will update the code later.
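
For reference, a minimal sketch (not the repository's actual code) of the per-epoch behaviour described above; the class and method names GlobalMap, forward_train and update_map are hypothetical placeholders:

```python
class GlobalMap:
    """Rasterized global-map memory that starts empty at each epoch."""
    def __init__(self):
        self.tiles = {}              # e.g. keyed by map location / scene token

    def reset(self):
        self.tiles.clear()           # the map is rebuilt from scratch every epoch


def train_one_epoch(model, data_loader, optimizer, global_map):
    global_map.reset()               # empty map at the start of the epoch
    for batch in data_loader:
        # Early iterations: the map prior is (nearly) empty and contributes little.
        # Late iterations: the map has been filled by earlier samples of this epoch.
        losses = model.forward_train(batch, global_map=global_map)
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        model.update_map(global_map, batch)   # write new predictions into the map
```

The alternative strategy mentioned above would presumably skip the per-epoch reset and initialize global_map from maps saved in a previous run, so the map prior is informative from the first iteration of every epoch.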
