How to transform the published range_image to cartesian XYZ point cloud? #250
@miguelcastillon Before I try to answer your question, what's the reason that made you follow this path? Couldn't you simply use the point cloud published by the driver?
I want to record many hours of lidar data and the onboard storage capacity of my platform is limited. Therefore, I'd like to use the most compact representation of the lidar data, which I believe is the range image. If there's an alternative to it, I'd be happy to consider it.
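For a rough sense of the savings (back-of-envelope figures, assuming a 128-beam sensor in 1024x20 mode; the beam count is not stated in this thread): a MONO16 range image is 1024 × 128 × 2 B ≈ 0.26 MB per frame, i.e. ≈ 5.2 MB/s at 20 Hz, while the same scan stored as three 4-byte floats per point is at least 1024 × 128 × 12 B ≈ 1.6 MB per frame (≈ 31 MB/s), so the range image is roughly 6× more compact even before any compression.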
There are several ways to approach this:
Thank you very much for your time. Ideally I'd like to go for the first option. I've implemented your suggestions but the result is still the same. Now, the function looks like this:

#include "ouster/client.h"
#include "ouster/lidar_scan.h"  // make_xyz_lut, stagger, cartesianT
#include "ouster/types.h"

#include <cv_bridge/cv_bridge.h>
#include <opencv2/core/eigen.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <sensor_msgs/image_encodings.h>
#include <std_msgs/String.h>

pcl::PointCloud<pcl::PointXYZ>::Ptr getXYZFromRangeImage(
    const sensor_msgs::ImageConstPtr& range_image_msg,
    const std_msgs::StringConstPtr& metadata_msg)
{
    // Decode the published 16-bit range image into an Eigen array.
    const auto cv_ptr =
        cv_bridge::toCvCopy(range_image_msg, sensor_msgs::image_encodings::MONO16);
    Eigen::MatrixXd eigen_image;
    cv::cv2eigen(cv_ptr->image, eigen_image);
    const Eigen::Ref<const Eigen::Array<uint32_t, -1, -1, Eigen::RowMajor>> range_image =
        eigen_image.cast<uint32_t>();

    // Build the XYZ look-up table from the sensor metadata.
    const auto metadata = ouster::sensor::parse_metadata(metadata_msg->data);
    const auto lut = ouster::make_xyz_lut(
        metadata.format.columns_per_frame, metadata.format.pixels_per_column,
        ouster::sensor::range_unit, metadata.beam_to_lidar_transform,
        metadata.lidar_to_sensor_transform, metadata.beam_azimuth_angles,
        metadata.beam_altitude_angles);
    const Eigen::ArrayX3f lut_direction = lut.direction.cast<float>();
    const Eigen::ArrayX3f lut_offset = lut.offset.cast<float>();

    // The published image is destaggered, so stagger it back before applying the LUT.
    const auto range_image_staggered =
        ouster::stagger(range_image, metadata.format.pixel_shift_by_row);

    Eigen::ArrayX3f cloud(lut_direction.rows(), 3);
    ouster::cartesianT(cloud, range_image_staggered, lut_direction, lut_offset);
    cloud *= 4.0f;  // heuristically found factor of 4, discussed in this thread

    pcl::PointCloud<pcl::PointXYZ>::Ptr pointcloud(new pcl::PointCloud<pcl::PointXYZ>);
    pointcloud->header.frame_id = range_image_msg->header.frame_id;
    pointcloud->header.stamp = range_image_msg->header.stamp.toNSec() / 1000ull;  // PCL stamps are in microseconds
    pointcloud->height = range_image_msg->height;
    pointcloud->width = range_image_msg->width;
    pointcloud->is_dense = false;
    pointcloud->points.resize(pointcloud->width * pointcloud->height);
    for (size_t i = 0; i < pointcloud->points.size(); ++i)
    {
        pointcloud->points[i].x = cloud(i, 0);
        pointcloud->points[i].y = cloud(i, 1);
        pointcloud->points[i].z = cloud(i, 2);
    }
    return pointcloud;
}

The result has not improved.
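For completeness, a minimal sketch of how such a function could be wired into a node, assuming the topic names from this thread; the subscriber pattern and the output topic name are illustrative, not from the original post. The metadata topic is latched, so caching the last message is enough:

#include <ros/ros.h>
#include <pcl_ros/point_cloud.h>  // lets a pcl::PointCloud be published directly

int main(int argc, char** argv)
{
    ros::init(argc, argv, "range_image_to_cloud");
    ros::NodeHandle nh;
    ros::Publisher cloud_pub =
        nh.advertise<pcl::PointCloud<pcl::PointXYZ>>("reconstructed_points", 1);

    // Cache the latched metadata message once, reuse it for every image.
    std_msgs::StringConstPtr metadata;
    ros::Subscriber meta_sub = nh.subscribe<std_msgs::String>(
        "/ouster/metadata", 1,
        [&](const std_msgs::StringConstPtr& msg) { metadata = msg; });
    ros::Subscriber img_sub = nh.subscribe<sensor_msgs::Image>(
        "/ouster/range_image", 10,
        [&](const sensor_msgs::ImageConstPtr& msg) {
            if (metadata) cloud_pub.publish(getXYZFromRangeImage(msg, metadata));
        });
    ros::spin();
    return 0;
}

Note that the function above re-parses the metadata and rebuilds the LUT on every frame; caching the LUT after the first metadata message would avoid that cost.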
One of my colleagues noted that you are using …
Another thing to note is that you shouldn't have to multiply the result by 4. When constructing the lut through make_xyz_lut, the range_unit you pass already takes care of scaling the ranges to meters.
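For intuition, a minimal sketch of the per-pixel linear map the LUT encodes (the real ouster::cartesianT additionally zeroes pixels with no return, which this sketch omits):

// For pixel i with raw range r_i, the Cartesian point is
//   xyz_i = direction_i * r_i + offset_i.
// Because range_unit is passed to make_xyz_lut, the conversion to meters is
// already baked into direction/offset, so no extra scale belongs on the output.
Eigen::ArrayX3f applyLut(const Eigen::ArrayXf& ranges,  // one raw range per pixel
                         const Eigen::ArrayX3f& lut_direction,
                         const Eigen::ArrayX3f& lut_offset)
{
    return lut_direction.colwise() * ranges + lut_offset;
}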
Ok, I understood what the problem was, thank you for your inputs. The thing is that the ouster driver publishes the range images as MONO16 with a resolution of 4 mm per pixel count. My mistake was that I was multiplying by 4 after applying the LUT, but it needs to be done before. I removed the

cloud *= 4.0f;

and did

const Eigen::Ref<const Eigen::Array<uint32_t, -1, -1, Eigen::RowMajor>> range_image =
    eigen_image.cast<uint32_t>() * 4;  // pixel counts -> mm

instead of

const Eigen::Ref<const Eigen::Array<uint32_t, -1, -1, Eigen::RowMajor>> range_image =
    eigen_image.cast<uint32_t>();

and now I can reconstruct the original point cloud with errors smaller than 2 mm.
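The factor of 4 is also consistent with the image bit depth (an inference, not stated explicitly in the thread): a MONO16 pixel holds at most 65535 counts, so at 4 mm per count the image can represent ranges up to 65535 × 4 mm ≈ 262 m, whereas at 1 mm per count it would saturate around 65.5 m, below the maximum range of long-range Ouster sensors. Quantizing to 4 mm is what lets the full range fit in 16 bits.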
You are right. This part of the code wasn't originally authored by me and I hadn't had the chance to examine it closely, but I had some doubts about it, hence the comment in the line before, which I added while refactoring the code. I will log a ticket to fix it. There are image encodings that allow more than 16 bits that we could use.
Hi! I'm reopening this issue because there is something that I overlooked: with the functions that I wrote I can successfully transform range images to XYZ point clouds. However, in order to undistort the scan, I need to know the timestamp of each point, and the range image doesn't seem to carry that information. Is there a way to recover it? Thanks!
Update: I have found that changing the for loop in my function into this works quite well (my configuration is 1024x20):

static constexpr size_t delta_t = 48828;  // 20 Hz -> 50e6 ns / 1024 points
for (size_t i = 0; i < pointcloud->points.size(); ++i)
{
    // Assumes a point type with a per-point time field t (e.g. ouster_ros::Point),
    // unlike the plain pcl::PointXYZ used above.
    pointcloud->points[i].x = cloud(i, 0);
    pointcloud->points[i].y = cloud(i, 1);
    pointcloud->points[i].z = cloud(i, 2);
    pointcloud->points[i].t = delta_t * (i % range_image_msg->width);
}

However, when I compare the timestamps of the reconstructed point cloud with the original, I get errors of up to 20 us. If I print the time differences between consecutive points in the original point cloud, they are close to 48828 ns, but not exactly constant.
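As an aside, the constant can be derived from the metadata rather than hard-coded; a sketch using the ouster client's mode helpers (assuming the metadata object from the function above):

// Nominal time between columns, from the lidar mode in the sensor metadata.
// frequency_of_lidar_mode() and n_cols_of_lidar_mode() are part of the
// ouster client's types API (ouster/types.h).
const double col_period_ns =
    1e9 / ouster::sensor::frequency_of_lidar_mode(metadata.mode) /
          ouster::sensor::n_cols_of_lidar_mode(metadata.mode);
// 1024x20: 1e9 / 20 / 1024 = 48828.125 ns

Note the exact period is fractional, so the integer constant truncates 0.125 ns per column; across a 1024-column frame that accumulates to at most ~0.13 us, which does not by itself explain 20 us differences.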
@miguelcastillon I am not sure how you came up with these numbers, but in any case please refer to this discussion with regard to the per-point timestamps.
Hi, @Samahu, thanks for your answer. The discussion you point to is about the timestamp field of the points in the published point cloud; my question, though, is how to recover each point's timestamp from the range image alone.
We don't have that information encoded in the image; the per-point timestamps are only available through the published point cloud.
Hi @Samahu, I was wondering if a ticket has been created for this so I can track it? It would be helpful to have a non-truncated range image for my application. Thanks!
Hi!
I want to create a small C++ ROS node that takes as input the /ouster/range_image published by the Ouster ROS driver and outputs the reconstructed 3D point cloud.
I create a look-up table from the published sensor metadata (/ouster/metadata), and then use that table to transform the range image to cartesian coordinates using the cartesian function.
However, this does not work as expected. In the next image I compare the reconstructed point cloud (in red) with the point cloud published by the Ouster ROS driver (/ouster/points, in white). As you can see, there seems to be a missing scale factor. I heuristically found this factor to be 4 (which would seem to match the 4 mm resolution of the published range images).
That link also shows that the published range image is destaggered, so I stagger it before passing it to the cartesian function (see the sketch below) and then multiply by 4. This seems to work much better (the reconstructed cloud is much more similar to the original published point cloud), but there is still some offset, especially in Z (see this lateral view).
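A rough sketch of the (de)stagger step referenced above, assuming per-row circular shifts by pixel_shift_by_row; the sign convention here is an assumption, and ouster::stagger / ouster::destagger remain the authoritative implementations:

// Each row r is circularly rotated by pixel_shift_by_row[r]; destaggering
// applies the shift and staggering undoes it (opposite sign).
template <typename T>
Eigen::Array<T, -1, -1, Eigen::RowMajor> shiftRows(
    const Eigen::Array<T, -1, -1, Eigen::RowMajor>& img,
    const std::vector<int>& pixel_shift_by_row, int sign)
{
    Eigen::Array<T, -1, -1, Eigen::RowMajor> out(img.rows(), img.cols());
    const int w = static_cast<int>(img.cols());
    for (int r = 0; r < img.rows(); ++r) {
        const int shift = ((sign * pixel_shift_by_row[r]) % w + w) % w;
        for (int c = 0; c < w; ++c)
            out(r, (c + shift) % w) = img(r, c);
    }
    return out;
}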
I have been through the docs and can't find what I'm missing.
Has anyone successfully converted the published range_image to a 3D point cloud?