Neural RGBD Baseline #6

Hi, thank you for your excellent project. I noticed that the results of the Neural RGBD baseline and some other baselines reported in your paper are better than the results in the original Neural RGBD paper. I want to ask what other settings you changed to obtain the current baseline results, apart from the number of samples chosen for each ray. What was the most effective change in achieving the better results?

Comments
Hi @XuqianRen! The reason the numbers are inconsistent is that we changed the culling strategy before running the evaluation. More specifically, we remove a mesh vertex if: 1. it is not inside any camera frustum (not observed); 2. it is occluded by other geometry (occluded); 3. it does not have enough valid depth measurements (e.g. areas with missing depth, such as windows). Neural RGBD only applied criteria 1 and 2; criterion 3 is new in our work. We also fixed a bug in the depth rendering of their original culling script (which affected criterion 2). We applied the new culling strategy to all the reconstructed meshes and ground-truth meshes, which is why the numbers in our tables are not consistent with theirs. For more details, you can refer to our culling script or our supplementary material. See eval_mesh.py for the whole evaluation pipeline.
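For concreteness, here is a minimal sketch of the three culling criteria described above. All function names and array layouts are assumptions for illustration (this is not the project's actual culling script), and for simplicity the raw sensor depth map stands in for the depth rendered from the reconstructed geometry that criterion 2 uses in practice:

```python
import numpy as np

def cull_mesh_vertices(vertices, poses, intrinsics, depth_maps, eps=0.02):
    """Return a boolean mask of mesh vertices to keep (hypothetical helper).

    vertices:   (N, 3) vertices in world coordinates
    poses:      (M, 4, 4) camera-to-world transforms
    intrinsics: (3, 3) pinhole camera matrix K
    depth_maps: (M, H, W) sensor depth, 0 where the measurement is invalid
    """
    H, W = depth_maps.shape[1:]
    keep = np.zeros(len(vertices), dtype=bool)
    for c2w, depth in zip(poses, depth_maps):
        # Transform vertices into this camera's frame and project them.
        w2c = np.linalg.inv(c2w)
        pts_cam = (w2c[:3, :3] @ vertices.T + w2c[:3, 3:4]).T
        z = pts_cam[:, 2]
        uv = (intrinsics @ pts_cam.T).T
        with np.errstate(divide="ignore", invalid="ignore"):
            u = uv[:, 0] / z
            v = uv[:, 1] / z
        # Criterion 1: the vertex lies inside this camera's frustum.
        in_frustum = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        ui = np.clip(np.nan_to_num(u).astype(int), 0, W - 1)
        vi = np.clip(np.nan_to_num(v).astype(int), 0, H - 1)
        d = depth[vi, ui]
        # Criterion 3: the pixel has a valid depth measurement
        # (e.g. not a missing-depth region such as a window).
        valid_depth = d > 0
        # Criterion 2: the vertex is not occluded, i.e. it is not behind
        # the observed surface by more than a small tolerance eps.
        not_occluded = z <= d + eps
        # A vertex survives if at least one camera observes it with
        # valid depth and without occlusion.
        keep |= in_frustum & valid_depth & not_occluded
    return keep
```

Vertices where the mask is False would then be dropped from both the reconstructed and ground-truth meshes before computing the reconstruction metrics, which is what makes the resulting numbers incomparable with the original Neural RGBD tables.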
Thank you for your quick reply @JingwenWang95. So you only changed the evaluation script, and the training parameters remain the same as in the original repositories?
Yeah, only the culling strategy is changed. Everything else is the same.
Thank you again for your reply. |