Lidar data coordinate system for preprocessed data #11

Closed
xslittlegrass opened this issue Sep 1, 2023 · 2 comments

xslittlegrass commented Sep 1, 2023

According to the documentation, the npz lidar files store rays_o and rays_d, which are in the world coordinate system.
However, the preprocessed data here doesn't seem to agree with that statement.

For example, the following computes the maximum norm of rays_o for the sequence from "Quick download of one sequence (seg100613)'s processed data":

import glob
import numpy as np

np.max(np.linalg.norm(np.vstack([np.load(f)['rays_o'][0] for f in glob.glob('lidar_TOP/*.npz')]), axis=1))

We would expect this to be on the order of the vehicle's travel distance, but we get 0.42 m instead.

For background, I'm interested in comparing the extracted mesh (produced by extract_mesh.py) with the lidar ground truth by overlaying them, so I'm looking for a way to merge the lidar point clouds into a single cloud in the same world coordinate frame as the extracted mesh.

ventusff (Contributor) commented Sep 1, 2023

Hi,

I have added a tutorial for comparing the mesh and point clouds in common world coordinates. You can check it out here. Remember to run git pull and git submodule update --init --recursive first to update the repo.

Apologies for the lack of clarity. There are actually multiple valid choices for processing the LiDAR data, leading to different combinations of the LiDAR transform and lidar_gts = (rays_o, rays_d, ranges). As long as you apply the lidar_gts together with the transform, every choice is correct.

The document autonomous_driving.md records one of the simpler choices, mainly intended for custom datasets (not Waymo), where the transform is an identity matrix and the LiDAR data is stored directly in world coordinates. For our processed Waymo data, we adopt a different choice: the LiDAR transform is not an identity matrix but a pose that moves with the ego car, and the GT point cloud is stored in the LiDAR coordinate system.

This is mainly for flexibility. Both settings are read in the same way in our code: we always read the LiDAR transform first (whether it is an identity matrix or a real pose), and then use it to convert the point cloud to world coordinates.
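
As a minimal sketch of that shared read path (the variable names and stand-in values below are illustrative, not the repo's actual API):

import numpy as np

# Illustrative stand-ins: in practice rays_o/rays_d/ranges come from the npz
# files, and lidar_to_world from the transforms described below.
rays_o = np.zeros((100, 3))                    # (N, 3) ray origins, LiDAR-local
rays_d = np.tile([1.0, 0.0, 0.0], (100, 1))    # (N, 3) unit ray directions
ranges = np.linspace(1.0, 50.0, 100)           # (N,) measured ranges
lidar_to_world = np.eye(4)                     # (4, 4) per-frame LiDAR pose

# Reconstruct the GT points in the LiDAR-local frame from rays and ranges.
pts_local = rays_o + rays_d * ranges[:, None]

# Apply the rigid transform: identity for the simple setting, an ego-following
# pose for the processed Waymo data. Either way, the result is in world coordinates.
R, t = lidar_to_world[:3, :3], lidar_to_world[:3, 3]
pts_world = pts_local @ R.T + t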

To transform the LiDAR GT points into the same "world" coordinates, you need to read the LiDAR and ego_car transforms stored in the sequence's scenario.pt, and then use them to transform the rays_o and rays_d (or the computed point clouds) stored in LiDAR-local coordinates. You can either use the raw data stored in scenario.pt directly, or first construct a Scene structure that loads scenario.pt and use the conveniences it provides (recommended). The latter is what the aforementioned tutorial demonstrates; a rough sketch of the raw-data route follows below.
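
As a rough sketch of the raw-data route (the exact layout of scenario.pt isn't shown in this thread, so the stand-in poses below are hypothetical placeholders for what you would read from it):

import numpy as np

# Hypothetical stand-ins: replace these with the ego_car pose for frame i and
# the LiDAR extrinsics actually read from the loaded scenario.pt contents.
ego_to_world = np.eye(4)   # (4, 4) world pose of the ego car at frame i
lidar_to_ego = np.eye(4)   # (4, 4) LiDAR mount pose relative to the ego car

# The LiDAR pose moves with the ego car, so the per-frame LiDAR-to-world
# transform is the composition of the two; apply it as in the sketch above.
lidar_to_world = ego_to_world @ lidar_to_ego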

Running it will open a pop-up window like this:
[screenshot: lidar_mesh_single_frame]

You can also uncomment the last few lines to compare the lidar points from all frames with the extracted mesh:
[screenshot: lidar_mesh_all]

You can also get an interactive player by running the following (demo video: lidar_mesh_video.mp4):
scene.debug_vis_anim(
    scene_dataloader=scene_dataloader,
    # plot_image=True, camera_length=8.,  # uncomment to also plot camera data
    plot_lidar=True, lidar_pts_ds=2,
    mesh_file=mesh_path  # overlay the extracted mesh for comparison
)

xslittlegrass (Author) commented

Thanks for the clarification and the tutorial!
