
Failed to extract accurate Point Cloud from VDB structure #46

Open · MohamedEllaithy42 opened this issue Jun 25, 2024 · 2 comments

@MohamedEllaithy42 commented Jun 25, 2024

First and foremost, thank you for your tremendous effort in creating this repository. It is truly inspiring and highly valuable to the scientific community.

I am currently evaluating the mapping accuracy of different frameworks, and I am having difficulty accurately extracting the point cloud from the VDB structure generated by VDBFusion.

Specifically, the number of points extracted from the VDB structure does not match the number of points in the original point cloud. Depending on how I set the voxel size and SDF truncation values, I obtain either more or fewer points than were originally inserted. I assume this arises from the DDA algorithm used during integration.

Here are the details of my process:

  1. Original Point Cloud: The point cloud, represented with the PCL library, contains a total of 2,455,166 points. Clustering it yielded the same number of points, as shown in the following visualization of the original point cloud:

[screenshots: original, original2]

  2. VDB Structure: The point cloud was represented with the VDB structure (by populating a points grid with the points from the point cloud) and clustered. However, upon extracting the points from this structure, only 1,340,674 points were found, as shown in the visualization of the VDB-extracted point cloud:

[screenshots: vdb_extracted, vdb_extracted2]

In your publication, "VDBFusion: Flexible and Efficient TSDF Integration of Range Sensor Data," you mention densely sampling the maps generated by Voxblox and VDBFusion into a point cloud for evaluation purposes. Additionally, there is a reference in the GitHub issues about the ease and efficiency of extracting the point cloud using OpenVDB, but I have not been able to achieve the same accuracy.

I have also followed the suggestions mentioned in the issue "Realtime Point Cloud extraction #22" in order to extract the point cloud from the VDB structure but without success.

I understand that the evaluation code is not provided in the repository. Could you please provide guidance on, or share, the relevant script/code snippet that demonstrates how to accurately sample a point cloud from the VDB structure? Specifically, I am looking for a method that ensures the extracted point cloud matches the original number of points before clustering.

Additionally, if possible, could you also share the evaluation code you used to compare the mapping accuracy of the different frameworks, to ensure a consistent and accurate comparison for my thesis?

Any help or direction you could provide would be greatly appreciated. Thank you for your time and assistance.

Here is the point extraction function I am using, for reference:

```cpp
std::tuple<std::vector<Eigen::Vector3d>, std::vector<Eigen::Vector3d>,
           std::vector<uint8_t>, std::vector<int>>
VDBVolume::ExtractPointCloud(float thresh) const {
    std::vector<Eigen::Vector3d> points;
    std::vector<Eigen::Vector3d> colors;
    std::vector<uint8_t> labels;
    std::vector<int> cluster_ids;

    auto colors_accessor = colors_grid->getConstAccessor();
    auto labels_accessor = labels_grid->getConstAccessor();
    auto clusters_accessor = clusters_grid->getConstAccessor();

    // Iterate over all active TSDF voxels and keep those within the threshold.
    for (auto iter = tsdf_grid->cbeginValueOn(); iter.test(); ++iter) {
        const auto& voxel = iter.getCoord();
        const auto tsdf = iter.getValue();

        if (std::abs(tsdf) < thresh) {
            // Use the voxel center in world coordinates as the point position.
            const auto point = GetVoxelCenter(voxel, points_grid->transform());
            const auto& color = colors_accessor.getValue(voxel);
            const auto& label = labels_accessor.getValue(voxel);
            const auto& cluster_id = clusters_accessor.getValue(voxel);

            points.emplace_back(point.x(), point.y(), point.z());
            colors.emplace_back(color.x(), color.y(), color.z());
            labels.push_back(static_cast<uint8_t>(label));
            cluster_ids.push_back(cluster_id);
        }
    }
    return std::make_tuple(points, colors, labels, cluster_ids);
}

void vdbfusion::VDBVolume::saveVDBToPCD(const vdbfusion::VDBVolume& vdbVolume,
                                        const std::string& outputdirectory) {
    PointCloudXYZRGBLC::Ptr cloud(new PointCloudXYZRGBLC);

    auto [points, colors, labels, cluster_ids] = vdbVolume.ExtractPointCloud(0.7f);

    if (points.size() != colors.size() || points.size() != labels.size() ||
        points.size() != cluster_ids.size()) {
        std::cerr << "Inconsistent data sizes in extracted point cloud data." << std::endl;
        return;
    }

    for (size_t i = 0; i < points.size(); ++i) {
        PointXYZRGBI pt;
        pt.x = points[i].x();
        pt.y = points[i].y();
        pt.z = points[i].z();
        pt.r = static_cast<uint8_t>(colors[i].x());
        pt.g = static_cast<uint8_t>(colors[i].y());
        pt.b = static_cast<uint8_t>(colors[i].z());
        pt.SemanticLabel = labels[i];
        pt.cluster_id = cluster_ids[i];
        cloud->points.push_back(pt);
    }

    cloud->width = cloud->points.size();
    cloud->height = 1;
    cloud->is_dense = false;

    const std::string outputFilePath =
        outputdirectory + "No.Pts=" + std::to_string(cloud->points.size()) + "__.pcd";

    // Save the cloud to a binary PCD file.
    pcl::io::savePCDFileBinary(outputFilePath, *cloud);
    std::cout << "\nVDB saved " << cloud->points.size() << " data points." << std::endl;
    std::cout << "--> at " << outputFilePath << std::endl;
}
```

In other words: the input to the algorithm is a point cloud, and through VDBFusion we obtain a VDB representation of the data, which you visualized only as a mesh, without extracting and visualizing a point cloud from the VDB structure. So my question is whether there is a specific reason for not visualizing it as a point cloud.

@MohamedEllaithy42 MohamedEllaithy42 changed the title Request for Guidance on Extracting Accurate Point Cloud from VDB Structure Failed to extract accurate Point Cloud from VDB structure Jun 25, 2024
@Enamel2 commented Jul 19, 2024

Maybe you can extract a mesh then sample points from it?

@MohamedEllaithy42 (Author)

Indeed I can, thanks for the suggestion. However, I need to extract the saved points directly from the VDB grids, since I need all the information stored in them (e.g., XYZ, RGB, semantic labels, and instance labels), and unfortunately I cannot recover that by sampling points from a mesh.
