lidar points xyz in debug scene #24

Open
alanxu89 opened this issue Sep 12, 2023 · 1 comment

alanxu89 commented Sep 12, 2023

If lidar rays_o and rays_d are all defined in world coordinates, why do we still need to do a transformation here? If I don't comment out this line, the result for my own data looks very wrong in the debug scene:

```python
pts_in_world = lidar0.world_transform.forward(pts).data.cpu().numpy()
```
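For reference, a minimal numpy sketch (the pose and point values are made up for illustration, not taken from the repo) of why applying a lidar-to-world transform to points that are already in world coordinates produces exactly this kind of offset:

```python
import numpy as np

# Hypothetical 4x4 lidar-to-world pose (values made up for illustration).
l2w = np.eye(4)
l2w[:3, 3] = [100.0, -50.0, 2.0]  # ego translation in the world frame

pt_local = np.array([10.0, 0.0, 1.0])           # a lidar hit in local coords
pt_world = l2w[:3, :3] @ pt_local + l2w[:3, 3]  # correct: local -> world

# Applying the transform again to an already-world point shifts it by the
# full ego pose a second time, which is the "very wrong" debug-scene result.
pt_double = l2w[:3, :3] @ pt_world + l2w[:3, 3]
print(pt_world)   # [110. -50.   3.]
print(pt_double)  # [210. -100.   5.]
```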


SecureSheII commented Oct 10, 2023

I believe the author made a mistake in the README. The coordinate system used to save the lidar npz files is still the local coordinate system per frame, not the global frame. In fact, I cannot find any code that transforms rays_o and rays_d to world coordinates in preprocess.py:

```python
if pixel_pose_local is not None:
    # #---- Optionally downsample on scans (waymo TOP: 64x2650; others 200x600)
    # ds_vertical = 4
    # ds_horizonal = 1
    # rays_o = rays_o[::ds_vertical, ::ds_horizonal]
    # rays_d = rays_d[::ds_vertical, ::ds_horizonal]
    # pixel_pose_local = pixel_pose_local[:, ::ds_vertical, ::ds_horizonal]
    # range_image_range = range_image_range[:, ::ds_vertical, ::ds_horizonal]
    # range_image_top_pose_tensor = range_image_top_pose_tensor[::ds_vertical, ::ds_horizonal]
    # Waymo: _pixel_pose_local=[vehicle to ENU(world)]
    mask_valid = tf.reduce_all(range_image_top_pose_tensor != 0, axis=-1).numpy()
    rays_o = rays_o[mask_valid][None, ...]
    rays_d = rays_d[mask_valid][None, ...]
    _pixel_pose_local = pixel_pose_local[0].numpy()[mask_valid][None, ...]
    _range_image_range = range_image_range[0].numpy()[mask_valid][None, ...]
    _pixel_pose_local[..., :3, 3] -= world_offset
    # NOTE: Delta-pose the ray to account for ego-car motion during delta timestamps
    dpose = np.linalg.inv(frame_pose @ extrinsic) @ _pixel_pose_local @ extrinsic
    # -------- OPTION 1: save original rays & dpose
    # np.savez_compressed(lidar_cur_fpath, rays_o=rays_o, rays_d=rays_d, ranges=_range_image_range, dpose=dpose)
    # -------- OPTION 2: directly save modified rays; also save dpose just in case of need.
    rays_o = tf.einsum('hwij,hwj->hwi', dpose[..., :3, :3], rays_o) + dpose[..., :3, 3]
    rays_d = tf.einsum('hwij,hwj->hwi', dpose[..., :3, :3], rays_d)
    np.savez_compressed(lidar_cur_fpath, rays_o=rays_o.numpy().astype(np.float32), rays_d=rays_d.numpy().astype(np.float32), ranges=_range_image_range.astype(np.float32), dpose=dpose.astype(np.float32))
else:
    _range_image_range = range_image_range[0].numpy()
    np.savez_compressed(lidar_cur_fpath, rays_o=rays_o.numpy().astype(np.float32), rays_d=rays_d.numpy().astype(np.float32), ranges=_range_image_range.astype(np.float32))
```
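As a sanity check on the delta-pose line above, here is a standalone numpy sketch (the pose values and the make_pose helper are made up for illustration, not taken from the repo). It shows that dpose maps rays from the per-pixel sensor frame to the frame-level sensor frame: with zero ego motion it is the identity, and with ego motion it is only a small local correction, never a jump into world/global coordinates:

```python
import numpy as np

def make_pose(yaw, t):
    """Hypothetical 4x4 rigid transform: rotation about z by yaw, translation t."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

extrinsic = make_pose(0.10, [1.5, 0.0, 2.0])      # lidar -> vehicle (made up)
frame_pose = make_pose(0.30, [100.0, 20.0, 0.0])  # vehicle -> world at frame time

# No ego motion during the sweep: the per-pixel pose equals the frame pose,
# so dpose collapses to identity and the saved rays are left untouched.
pixel_pose = frame_pose.copy()
dpose = np.linalg.inv(frame_pose @ extrinsic) @ pixel_pose @ extrinsic
assert np.allclose(dpose, np.eye(4))

# With ego motion, dpose re-expresses rays captured at the pixel timestamp
# in the lidar frame at the frame timestamp: still a local frame, not world.
pixel_pose_moving = make_pose(0.30, [100.5, 20.0, 0.0])  # vehicle moved 0.5 m
dpose = np.linalg.inv(frame_pose @ extrinsic) @ pixel_pose_moving @ extrinsic
assert not np.allclose(dpose, np.eye(4))
print(dpose[:3, 3])  # translation on the order of 0.5 m, not of the 100 m pose
```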

nor in waymo_dataset.py, even though the README above states that the rays are in world coordinates.

Someone also confirmed in #17 (comment) that the saved lidar data should be in local coordinates.

And the unit test code for the lidar data in waymo_dataset.py also confirms that what is saved is in local coordinates:

```python
# lidar-to-world transforms for the frame indices of the sampled rays
l2w = lidars.world_transform[lidar_gts['i']]
# reconstruct hit points in LOCAL coords: rays_o + rays_d * ranges
pts_local = torch.addcmul(lidar_gts['rays_o'], lidar_gts['rays_d'], lidar_gts['ranges'].unsqueeze(-1))
pts = l2w.forward(pts_local)  # only here are the points mapped into world coords
```

But after reading the clearer explanation in #11 (comment), I think the rule is: when the lidar data are stored in the lidar's local coordinate system, l2w must be the actual (non-identity) lidar-to-world matrix; when the lidar data are already stored in world coordinates, l2w should just be the identity. So in your case, maybe you should try setting your l2w to identity, since your data are already in world coordinates; see the sketch below.
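To illustrate that fix, a small self-contained torch sketch (the ray values are made up, and how your loader exposes l2w is an assumption): if the saved rays are already in world coordinates, an identity l2w leaves the reconstructed points unchanged:

```python
import torch

# Made-up rays already expressed in WORLD coordinates (shape [N, 3]).
rays_o = torch.tensor([[110.0, -50.0, 3.0]])
rays_d = torch.tensor([[1.0, 0.0, 0.0]])
ranges = torch.tensor([10.0])

# Reconstruct hit points; since the rays are world-frame, so are the points.
pts = torch.addcmul(rays_o, rays_d, ranges.unsqueeze(-1))

# "l2w = identity" makes the local->world step a no-op.
l2w = torch.eye(4)
pts_h = torch.cat([pts, torch.ones_like(pts[..., :1])], dim=-1)
assert torch.allclose((pts_h @ l2w.T)[..., :3], pts)  # transform changes nothing
```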
