
Help with Parametrization #44

Open
laurenzheidrich opened this issue Jun 10, 2024 · 4 comments

Comments

@laurenzheidrich

Hi & thanks for your great repository!

Using the KITTI dataset I get very good results. Now I am trying to run your pipeline on a custom dataset. While the fine detail of cars is astonishing, the reconstruction of large structures such as the street is quite bad. I have tried different parametrizations, but the problem persists.

[Two screenshots of the resulting reconstructed mesh]

[Screenshot of the merged point cloud]

As you can see, the merged point cloud (consisting of 50 scans) is very dense and provides great detail, but the mesh is full of holes and missing parts. I have used the standard KITTI parametrization you provided, but also played around with voxel_size, sdf_trunc, min_weight, fill_holes, etc., varying each between 0.1x and 10x its default, and the results were always almost the same.
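For reference, this is roughly how I drive the pipeline for the sweep above (a minimal sketch assuming the vdbfusion Python API as shown in the README; `dataset` is just a placeholder for my own scan/pose loader, and the values are the README defaults I started from):

```python
import numpy as np
import vdbfusion

# Defaults I started the sweep from (each varied between 0.1x and 10x).
vdb_volume = vdbfusion.VDBVolume(
    voxel_size=0.1,      # meters
    sdf_trunc=0.3,       # truncation distance, roughly 3x the voxel size
    space_carving=False,
)

# `dataset` stands in for my loader: it yields an (N, 3) float64 scan
# and the corresponding 4x4 pose matrix for each of the 50 frames.
for scan, pose in dataset:
    vdb_volume.integrate(scan, pose)

# fill_holes / min_weight only affect the mesh extraction step,
# not the integration itself.
vertices, triangles = vdb_volume.extract_triangle_mesh(
    fill_holes=True, min_weight=5.0
)
```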

Can you point me in any direction of what I could change in the parametrization / setting etc.?

@laurenzheidrich (Author) commented Jun 15, 2024

Translating my point clouds down by 1.5-2.5 m tremendously improved the reconstruction result. I am closing this issue :)

@lijing137

> Translating my point clouds down by 1.5-2.5 m tremendously improved the reconstruction result. I am closing this issue :)

I encountered a similar issue. Could you please explain it in more detail? Is it about lowering the height of the LiDAR?

@laurenzheidrich (Author)

Yeah, so when I looked at my scans in comparison to KITTI, I saw that the ground points are around 1.5 m higher than the ground points of KITTI. So, as a last resort, I just translated all my scans down by 1.5 m along the z-axis so that the ground points of my scans and the KITTI scans are roughly aligned.

After doing that, VDB started to work perfectly. To be honest, I don't know why; I'm not sure whether there is some parameter in the code that detects the ground at a certain pre-defined height. It would be great if the authors could comment on this.
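In case it helps, the whole workaround amounts to something like this (a minimal sketch; Z_OFFSET is the value I eyeballed from comparing ground heights, and whether you shift the scan points or the pose translation depends on whether your scans are already expressed in the world frame):

```python
import numpy as np

Z_OFFSET = -1.5  # meters; roughly aligns my ground height with KITTI's

def shift_scan(scan: np.ndarray) -> np.ndarray:
    """Translate an (N, 3) scan along z, in whatever frame the scan is in."""
    return scan + np.array([0.0, 0.0, Z_OFFSET])

def shift_pose(pose: np.ndarray) -> np.ndarray:
    """Alternative: lower the z translation of the 4x4 pose instead,
    which shifts the scan in the world frame rather than its own frame."""
    shifted = pose.copy()
    shifted[2, 3] += Z_OFFSET
    return shifted
```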

@lijing137

> Yeah, so when I looked at my scans in comparison to KITTI, I saw that the ground points are around 1.5 m higher than the ground points of KITTI. So, as a last resort, I just translated all my scans down by 1.5 m along the z-axis so that the ground points of my scans and the KITTI scans are roughly aligned.
>
> After doing that, VDB started to work perfectly. To be honest, I don't know why; I'm not sure whether there is some parameter in the code that detects the ground at a certain pre-defined height. It would be great if the authors could comment on this.

OK, thank you very much.
