
How to transform the published range_image to cartesian XYZ point cloud? #250

Closed
miguelcastillon opened this issue Nov 6, 2023 · 15 comments · May be fixed by #349
Labels
question Further information is requested

Comments

@miguelcastillon

Hi!

I want to create a small C++ ROS node that takes as input the /ouster/range_image topic published by the Ouster ROS driver and outputs the reconstructed 3D point cloud.

I create a lookup table from the published sensor metadata (/ouster/metadata) like this:

const auto metadata = ouster::sensor::parse_metadata(metadata_msg->data);
const auto lut = ouster::make_xyz_lut(metadata);

Then, I use that table to transform the range image to cartesian coordinates using the cartesian function:

  const cv_bridge::CvImagePtr cv_ptr =
      cv_bridge::toCvCopy(range_image_msg, sensor_msgs::image_encodings::MONO16);
  Eigen::MatrixXd eigen_image;
  cv::cv2eigen(cv_ptr->image, eigen_image);
  const Eigen::Ref<const Eigen::Array<uint32_t, -1, -1, Eigen::RowMajor>> range_image =
      eigen_image.cast<uint32_t>();
  const Eigen::Array<double, Eigen::Dynamic, 3> cloud = ouster::cartesian(range_image, lut);

However, this does not work as expected. In the next image I compare the reconstructed point cloud (in red) and the point cloud published by the Ouster ROS driver (/ouster/points, in white).
[image: reconstructed cloud (red) vs. driver-published cloud (white)]
As you can see, there seems to be a missing scale factor. I heuristically found this factor to be 4 (which would seem to match the 4 mm resolution of the published range images).

That link also shows that the published range image is destaggered, so I stagger it before passing it to cartesian and then multiply by 4. This works much better (the reconstructed cloud is much more similar to the published point cloud),
[image: reconstructed vs. published clouds after staggering and scaling]
but there is still some offset, especially in Z (see this lateral view):
[image: lateral view showing the remaining Z offset]

I have been through the docs and can't find what I'm missing.

Has anyone successfully converted the published range_image to a 3D point cloud?

Platform (please complete the following information):

  • Ouster Sensor? OS-0-32-U1
  • Ouster Firmware Version? v2.5.2
  • ROS version/distro? noetic
  • Operating System? Linux
  • Machine Architecture? x64
@miguelcastillon miguelcastillon added the question Further information is requested label Nov 6, 2023
@Samahu Samahu self-assigned this Nov 7, 2023
@Samahu
Contributor

Samahu commented Nov 7, 2023

@miguelcastillon Before I try to answer your question, what's the reason that made you follow this path? Couldn't you simply use the point cloud published on the /ouster/points topic? (/ouster/points is already based on the range measurements)

@miguelcastillon
Author

I want to record many hours of lidar data and the onboard capacity of my platform is limited. Therefore, I'd like to use the most compact representation of the lidar data, which I believe is the range image. If there's an alternative to it I'd be happy to consider it.

@Samahu
Contributor

Samahu commented Nov 7, 2023

There are several ways to approach this:

  • continue on your current path and record only /ouster/range_image, then use the same methods supplied by ouster to construct an xyz lut as outlined here:
    auto xyz_lut = ouster::make_xyz_lut(
        info.format.columns_per_frame, info.format.pixels_per_column,
        ouster::sensor::range_unit, info.beam_to_lidar_transform,
        additional_transform, info.beam_azimuth_angles,
        info.beam_altitude_angles);
    // The ouster_ros driver currently only uses single precision when it
    // produces the point cloud. So there is no benefit to computing point
    // cloud xyz coordinates using double precision (for the time being).
    lut_direction = xyz_lut.direction.cast<float>();
    lut_offset = xyz_lut.offset.cast<float>();

    and apply that to the range image as shown in this line:
    ouster::cartesianT(points, range, lut_direction, lut_offset);
  • The other approach would be to process the data coming from /ouster/points and only extract and store the xyz values, if that still fits your storage budget (12 bytes for xyz per pixel vs 4 bytes for range only per pixel); a small sketch follows this list.
  • You could also wait till the PR SW-5466: Support Velodyne and other point types in ouster-ros driver #216 is merged, which readily lets you choose a point cloud with only the xyz fields (through the pcl::PointXYZ point representation); again this requires 12 bytes vs 4.
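For the second option, here is a minimal sketch (ROS 1 / PCL; the helper name is hypothetical, not part of the driver) of extracting only the xyz fields from the published cloud:

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <sensor_msgs/PointCloud2.h>

// Keep only x, y, z from the driver's PointCloud2 so recorded data stays compact.
pcl::PointCloud<pcl::PointXYZ>::Ptr extractXYZ(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  // fromROSMsg copies only the fields declared by pcl::PointXYZ (x, y, z)
  // and ignores the rest (intensity, t, reflectivity, ring, ...).
  pcl::fromROSMsg(*msg, *cloud);
  return cloud;
}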

@miguelcastillon
Author

Thank you very much for your time. Ideally I'd like to go for the first option.

I've implemented your suggestions but the result is still the same.

Now, the function looks like this:

#include "ouster/client.h"
#include "ouster/types.h"

#include <cv_bridge/cv_bridge.h>
#include <opencv2/core/eigen.hpp>

pcl::PointCloud<pcl::PointXYZ>::Ptr getXYZFromRangeImage(const sensor_msgs::ImageConstPtr& range_image_msg,
                                                         const std_msgs::StringConstPtr& metadata_msg)
{
  const auto cv_ptr = cv_bridge::toCvCopy(range_image_msg, sensor_msgs::image_encodings::MONO16);
  Eigen::MatrixXd eigen_image;
  cv::cv2eigen(cv_ptr->image, eigen_image);
  const Eigen::Ref<const Eigen::Array<uint32_t, -1, -1, Eigen::RowMajor>> range_image =
      eigen_image.cast<uint32_t>();

  const auto metadata = ouster::sensor::parse_metadata(metadata_msg->data);
  const auto lut =
      ouster::make_xyz_lut(metadata.format.columns_per_frame,
                           metadata.format.pixels_per_column, ouster::sensor::range_unit,
                           metadata.beam_to_lidar_transform, metadata.lidar_to_sensor_transform,
                           metadata.beam_azimuth_angles, metadata.beam_altitude_angles);

  const Eigen::ArrayX3f lut_direction = lut.direction.cast<float>();
  const Eigen::ArrayX3f lut_offset = lut.offset.cast<float>();

  const auto range_image_staggered =
      ouster::stagger(range_image, metadata.format.pixel_shift_by_row);
  Eigen::ArrayX3f cloud(lut_direction.rows(), 3);
  ouster::cartesianT(cloud, range_image_staggered, lut_direction, lut_offset);
  cloud *= 4.0f;

  pcl::PointCloud<pcl::PointXYZ>::Ptr pointcloud(new pcl::PointCloud<pcl::PointXYZ>);
  pointcloud->header.frame_id = range_image_msg->header.frame_id;
  pointcloud->header.stamp = range_image_msg->header.stamp.toNSec() / 1000ull;
  pointcloud->height = range_image_msg->height;
  pointcloud->width = range_image_msg->width;
  pointcloud->is_dense = false;
  pointcloud->points.resize(pointcloud->width * pointcloud->height);
  for (size_t i = 0; i < pointcloud->points.size(); ++i)
  {
    pointcloud->points[i].x = cloud(i, 0);
    pointcloud->points[i].y = cloud(i, 1);
    pointcloud->points[i].z = cloud(i, 2);
  }
  return pointcloud;
}

The result has not improved.

@Samahu
Contributor

Samahu commented Nov 7, 2023

One of my colleagues noted that you are using MONO16 while our range images require a 19-bit representation. The only UDP profile that fits within 16 bits is RNG15_RFL8_NIR8, so you could check whether switching to that profile fixes the problem; for the other UDP lidar profiles you need an image format that can hold 19 bits.
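If it helps, here is a minimal sketch of checking which profile the sensor is configured with from the parsed metadata; the helper name is hypothetical and the enum spelling is an assumption to verify against your SDK version:

#include "ouster/types.h"

// Only the low-data-rate profile keeps the range within 15 bits; the other
// profiles use 19 bits and would be truncated by a MONO16 image.
bool rangeFitsIn16Bits(const ouster::sensor::sensor_info& metadata)
{
  using ouster::sensor::UDPProfileLidar;
  return metadata.format.udp_profile_lidar ==
         UDPProfileLidar::PROFILE_RNG15_RFL8_NIR8;
}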

@Samahu
Contributor

Samahu commented Nov 7, 2023

Another thing to note is that you shouldn't have to multiply the result by 4. When constructing the lut through ouster::make_xyz_lut, the measurements are already converted from mm to meters (when using ouster::sensor::range_unit).

@miguelcastillon
Author

miguelcastillon commented Nov 8, 2023

Ok I understood what the problem was, thank you for your inputs.

The thing is that the ouster driver publishes the range images as sensor_msgs::image_encodings::MONO16, so it divides the range data by 4 to fit into 16 bits (see the >> 2 here).

My mistake was that I was multiplying by 4 after applying the LUT but it needs to be done before. I removed the

cloud *= 4.0f;

and did

const Eigen::Ref<const Eigen::Array<uint32_t, -1, -1, Eigen::RowMajor>> range_image =
      eigen_image.cast<uint32_t>() * 4;

instead of

const Eigen::Ref<const Eigen::Array<uint32_t, -1, -1, Eigen::RowMajor>> range_image =
      eigen_image.cast<uint32_t>();

and now I can reconstruct the original point cloud with errors smaller than 2mm.
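For reference, a short sketch of why the order matters, assuming the lut is applied as xyz = range * direction + offset:

// xyz = range * direction + offset
//
// Scaling after the lut also scales the offset term (wrong):
//   4 * (r * dir + offset) = 4*r*dir + 4*offset
// Scaling the range before the lut leaves the offset intact (right):
//   (4 * r) * dir + offset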

@Samahu
Contributor

Samahu commented Nov 8, 2023

You are right. This part of the code wasn't originally authored by me and I hadn't had the chance to examine it closely, but I had some doubts about it, hence the comment in the line before, which I added while refactoring the code. I will log a ticket to fix it. There are image encodings that allow more than 16 bits that we could use.

@miguelcastillon
Author

Hi!

I'm reopening this issue because there is something that I overlooked: with the functions that I wrote I can successfully transform range images to XYZ point clouds. However, in order to undistort the scan, I need to know the timestamp of each point. The ouster_ros::Point has the field t for that, but I'm unsure of how it is encoded in the image. How could I obtain it? Is it in increasing column order?

Thanks!

@miguelcastillon
Author

miguelcastillon commented Mar 27, 2024

Update:

I have found that changing the for loop in my function into this works quite well (config 1024x20):

static constexpr size_t delta_t = 48828;  // 20Hz --> 50e6 ns / 1024 points
for (size_t i = 0; i < pointcloud->points.size(); ++i)
{
  pointcloud->points[i].x = cloud(i, 0);
  pointcloud->points[i].y = cloud(i, 1);
  pointcloud->points[i].z = cloud(i, 2);
  pointcloud->points[i].t = delta_t * (i % range_image_msg.width);
}

However, when I compare the timestamp of the reconstructed point cloud with the original I get errors up to 20 us. If I print the time difference between consecutive points in the original point cloud, I get values like these:

45390, 47600, 52240, 46220, 48870, 51430, 49400, 45890, 52700, 48200, 46110, 52310, 44080, 48510, 50780, 48050, 52080, 49580, 47520, 48150, 50750, 46570, 46300, 49860, 51550, 50330, 45240, 49820, 47960, 50340, 50930, 48580, 45450, 47610, 52080, 47280, 51780, 45490, 50580, 45820, 50960, 46580, 52730, 45960, 48160, 50510, 51530, 44780, 50660, 48480 ...

So they are close to 48828 but not quite. Could I somehow have a better guess for delta_t than a constant 48828? I can't seem to find any pattern in the vector.

@Samahu
Contributor

Samahu commented Apr 16, 2024

@miguelcastillon I am not sure how you came up with these numbers, but in any case please refer to this discussion with regard to the t field of individual points in the generated point cloud!

@miguelcastillon
Author

Hi, @Samahu, thanks for your answer. The discussion you point to is about the field t in ouster_ros::Point, but my question is about how to get this information from the range_image. Is the timestamp of each pixel in the image encoded somewhere?

@Samahu
Contributor

Samahu commented Apr 17, 2024

We don't have that information encoded in the image. The t stamps are actually generated by the sensor when the laser is fired. Generally they are well spaced (for 1024x20 this happens to be 48828.125 ns), but if you try to interpolate them and compare with what the sensor generates, they won't match exactly due to minuscule variations in the mechanical rotation speed. Using the values reported by the sensor is more accurate.
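For reference, a minimal sketch of where the nominal spacing used above comes from (the values assume the 1024x20 mode discussed in this thread):

// Nominal per-column period for a 1024x20 lidar mode.
constexpr double frame_period_ns  = 1e9 / 20.0;                 // 50,000,000 ns per frame
constexpr double column_period_ns = frame_period_ns / 1024.0;   // 48828.125 ns per column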

@headlee

headlee commented Aug 1, 2024

You are right. This part of the code wasn't originally authored by me and I hadn't had the chance to examine it closely, but I had some doubts about it, hence the comment in the line before, which I added while refactoring the code. I will log a ticket to fix it. There are image encodings that allow more than 16 bits that we could use.

Hi @Samahu, I was wondering if a ticket was made for this so I can track it? It would be helpful to have a non-truncated range image for my application. Thanks!

@Samahu
Contributor

Samahu commented Aug 3, 2024

Hi @headlee, sorry for the delay. You may track the issue using the following PR: #349. This PR isn't ready for merge yet, but if you use the range published on the PR, the values won't be truncated.
