Hello! Thank you for the work.

If I am correct, BEVFusion uses depth information during inference that is derived from the `lidar2image` transformation matrix. However, since the lidar scans and the images are captured at slightly different timestamps, the `lidar2image` transformation matrix should not always be the same while the vehicle is moving.

In your code, however, I do not see a ROS node that subscribes to the vehicle's pose, and `lidar2image` is only initialized once in the model. May I kindly check whether you assume that the `lidar2image` transformation matrix is approximately constant?
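For context, the `lidar2image` matrix is commonly composed from the camera intrinsics and the lidar-to-camera extrinsics. A minimal numpy sketch of this composition, where the concrete values and variable names are illustrative assumptions, not taken from this repository:

```python
import numpy as np

# Illustrative calibration values only; a real setup reads these
# from the sensor calibration, not from hard-coded constants.
K = np.array([[1000.0,    0.0, 640.0],   # 3x3 camera intrinsics
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
lidar2cam = np.eye(4)                    # 4x4 lidar-to-camera extrinsics
lidar2cam[:3, 3] = [0.1, -0.2, 0.0]      # example translation

# 3x4 projection: maps homogeneous lidar points to image pixels.
lidar2image = K @ lidar2cam[:3, :]
```

If `lidar2cam` is fixed (rigid mounting) and `K` does not change, then `lidar2image` is constant regardless of vehicle motion; the question above is essentially whether that assumption holds across the two sensors' capture timestamps.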
Hi, the code uses ROS `message_filters`: by default, lidar and camera messages with very close timestamps are matched and treated as synchronized. Since the camera and lidar are rigidly mounted together and their acquisition is time-synchronized, `lidar2cam` is considered unchanged while the vehicle is in motion.
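For reference, a minimal sketch of how such timestamp-based pairing is typically done with ROS1 `message_filters` in Python. The topic names, message types, and the `slop` tolerance are assumptions for illustration, not values from this repository:

```python
import rospy
import message_filters
from sensor_msgs.msg import Image, PointCloud2

def synced_callback(image_msg, cloud_msg):
    # Invoked only when the two message timestamps fall within `slop`
    # seconds of each other, so the pair is treated as one sensor frame.
    dt = abs((image_msg.header.stamp - cloud_msg.header.stamp).to_sec())
    rospy.loginfo("synchronized pair, dt = %.3f s", dt)

rospy.init_node("lidar_camera_sync_example")
image_sub = message_filters.Subscriber("/camera/image_raw", Image)    # assumed topic
cloud_sub = message_filters.Subscriber("/lidar/points", PointCloud2)  # assumed topic
sync = message_filters.ApproximateTimeSynchronizer(
    [image_sub, cloud_sub], queue_size=10, slop=0.05)  # 50 ms tolerance (assumed)
sync.registerCallback(synced_callback)
rospy.spin()
```

With pairs matched this tightly, the vehicle moves very little between the two capture times, which is why treating `lidar2cam` (and hence `lidar2image`) as constant is a reasonable approximation.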