First and foremost, thank you for your tremendous effort in creating this repository. It is truly inspiring and highly valuable to the scientific community.
I am currently working on evaluating the mapping accuracy of different frameworks, and I am encountering difficulties in accurately extracting a point cloud from the VDB structure generated by VDBFusion.
Specifically, my challenge lies in the discrepancy between the number of points in the original point cloud and the number of points extracted from the VDB structure. Depending on how I set the voxel size and SDF truncation values, I obtain either more or fewer points than were originally inserted. I assume this arises from the DDA algorithm used during integration.
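To illustrate my understanding of why the counts differ (a minimal pure-Python sketch of voxel bucketing, not the actual VDBFusion code): a grid keeps one sample per occupied voxel, so the number of recoverable points tracks the number of active voxels rather than the number of inserted points. With TSDF truncation each measurement can also activate a whole band of voxels along the ray, which is how the extracted count can come out higher as well.

```python
# Sketch: why the number of points recoverable from a voxel grid differs
# from the number of points inserted into it. Pure Python, no OpenVDB;
# a dict keyed by integer voxel coordinates stands in for the sparse grid.

def voxelize(points, voxel_size):
    """Bucket points into voxels; return one representative per occupied voxel."""
    grid = {}
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        grid.setdefault(key, []).append((x, y, z))
    # One extracted point per occupied voxel (here: the voxel center).
    return [((i + 0.5) * voxel_size, (j + 0.5) * voxel_size, (k + 0.5) * voxel_size)
            for (i, j, k) in grid]

# 1024 input points on a small planar patch (coordinates exact in binary
# floating point, so the bucketing below has no rounding surprises).
points = [(i * 0.125, (i % 16) * 0.125, 0.0) for i in range(1024)]

coarse = voxelize(points, voxel_size=1.0)    # few, large voxels
fine   = voxelize(points, voxel_size=0.125)  # one point per voxel

print(len(points), len(coarse), len(fine))
```

With the coarse voxel size many input points collapse into each voxel (128 extracted from 1024 inserted); at the fine size every point lands in its own voxel and the count matches again.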
Here are the details of my process:
Original Point Cloud: The point cloud, represented with the PCL library, contains a total of 2,455,166 points. Clustering it left the number of points unchanged, as shown in the following visualization of the original point cloud:
VDB Structure: The same point cloud was then represented with a VDB structure (by populating a points grid with the points from the point cloud) and clustered. However, upon extracting the points back from this structure, only 1,340,674 points were recovered, as shown in the visualized VDB-extracted point cloud:
In your publication, "VDBFusion: Flexible and Efficient TSDF Integration of Range Sensor Data," you mention densely sampling the maps generated by Voxblox and VDBFusion into a point cloud for evaluation purposes. Additionally, there is a reference in the GitHub issues about the ease and efficiency of extracting the point cloud using OpenVDB, but I have not been able to achieve the same accuracy.
I have also followed the suggestions mentioned in the issue "Realtime Point Cloud extraction #22" in order to extract the point cloud from the VDB structure but without success.
I understand that the evaluation code is not provided in the repository. Could you please point me to, or share, the script/code snippet that demonstrates how to accurately sample a point cloud from the VDB structure? Specifically, I am looking for a method that ensures the extracted point cloud matches the original number of points before clustering.
Additionally, if possible, could you also share the evaluation code you used to compare the mapping accuracy of different frameworks? This would ensure a consistent and accurate comparison for my thesis.
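For reference, my current understanding of such an evaluation is a two-way nearest-neighbour distance between the densely sampled map and the ground truth (accuracy and completeness). A brute-force sketch of that metric, written by me and not taken from the paper's evaluation code:

```python
import math

def nearest_dist(p, cloud):
    """Distance from point p to its nearest neighbour in cloud (brute force)."""
    return min(math.dist(p, q) for q in cloud)

def accuracy_completeness(map_pts, gt_pts):
    """Mean map->GT distance (accuracy) and mean GT->map distance (completeness)."""
    acc = sum(nearest_dist(p, gt_pts) for p in map_pts) / len(map_pts)
    comp = sum(nearest_dist(q, map_pts) for q in gt_pts) / len(gt_pts)
    return acc, comp

# Toy example: the estimated map is the ground truth shifted by 0.1 m.
gt  = [(float(i), 0.0, 0.0) for i in range(10)]
est = [(float(i) + 0.1, 0.0, 0.0) for i in range(10)]
acc, comp = accuracy_completeness(est, gt)
print(acc, comp)
```

A real evaluation would of course use a KD-tree instead of the O(n²) loop, but the metric itself is the same.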
Any help or direction you could provide would be greatly appreciated. Thank you for your time and assistance.
Here is the point-extraction function I use, for reference:

```cpp
void vdbfusion::VDBVolume::saveVDBToPCD(const vdbfusion::VDBVolume& vdbVolume,
                                        const std::string& outputdirectory) {
    PointCloudXYZRGBLC::Ptr cloud(new PointCloudXYZRGBLC);
    auto [points, colors, labels, cluster_ids] = vdbVolume.ExtractPointCloud(0.7);
    if (points.size() != colors.size() || points.size() != labels.size() ||
        points.size() != cluster_ids.size()) {
        std::cerr << "Inconsistent data sizes in extracted point cloud data." << std::endl;
        return;
    }
    for (size_t i = 0; i < points.size(); ++i) {
        PointXYZRGBI pt;
        pt.x = points[i].x();
        pt.y = points[i].y();
        pt.z = points[i].z();
        pt.r = static_cast<uint8_t>(colors[i].x());
        pt.g = static_cast<uint8_t>(colors[i].y());
        pt.b = static_cast<uint8_t>(colors[i].z());
        pt.SemanticLabel = labels[i];
        pt.cluster_id = cluster_ids[i];
        cloud->points.push_back(pt);
    }
    cloud->width = cloud->points.size();
    cloud->height = 1;
    cloud->is_dense = false;
    std::string outputFilePath = outputdirectory + "No.Pts=" +
                                 std::to_string(cloud->points.size()) + "__.pcd";
    // Save the cloud to a PCD file
    pcl::io::savePCDFileBinary(outputFilePath, *cloud);
    std::cout << "\nVDB saved " << cloud->points.size() << " data points." << std::endl;
    std::cout << "--> at " << outputFilePath << std::endl;
}
```
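For context, my understanding of what the `ExtractPointCloud(0.7)` call above does conceptually (a pure-Python sketch in which a dict stands in for the sparse grid; the names are mine, not the actual VDBFusion internals): iterate the active voxels of the TSDF grid and emit the center of every voxel whose |TSDF| is below the threshold, together with its stored attributes.

```python
# Sketch: threshold-based point extraction over a sparse TSDF grid.
# The dict maps integer voxel coordinates -> (tsdf, attribute); real code
# would iterate OpenVDB's active voxels instead.

def extract_point_cloud(grid, voxel_size, thresh):
    points, attrs = [], []
    for (i, j, k), (tsdf, attr) in grid.items():
        if abs(tsdf) < thresh:  # keep only near-surface voxels
            points.append(((i + 0.5) * voxel_size,
                           (j + 0.5) * voxel_size,
                           (k + 0.5) * voxel_size))
            attrs.append(attr)
    return points, attrs

grid = {
    (0, 0, 0): (0.05, "label_a"),   # near surface  -> extracted
    (1, 0, 0): (0.90, "label_b"),   # far from surface -> dropped
    (2, 0, 0): (-0.30, "label_c"),  # near surface (inside) -> extracted
}
pts, attrs = extract_point_cloud(grid, voxel_size=0.1, thresh=0.7)
print(len(pts), attrs)
```

This makes the count dependence explicit: the result has one point per active voxel passing the threshold, so it cannot in general match the number of originally integrated points.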
In other words, the input to the algorithm was a point cloud, and through the VDBFusion algorithm we obtained a VDB structure representing the data. You visualized that structure only as a mesh and did not extract and visualize a point cloud from it. So my question is: is there a specific reason for not visualizing it as a point cloud?
MohamedEllaithy42 changed the title from "Request for Guidance on Extracting Accurate Point Cloud from VDB Structure" to "Failed to extract accurate Point Cloud from VDB structure" on Jun 25, 2024.
Indeed I can, thanks for the suggestion. However, I need to extract the saved points directly from the VDB grids, since I need all the information stored in them (e.g., XYZ, RGB, semantic labels, and instance labels), and unfortunately I cannot obtain that by sampling points from the mesh.
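That said, one workaround I am considering (a hedged pure-Python sketch of the idea, with a dict standing in for the grid; none of these names are VDBFusion API): sample points on the mesh for the geometry comparison, then map each sampled point back to its voxel and look up the stored label there.

```python
import random

def sample_triangle(a, b, c, n, rng):
    """Uniformly sample n points on triangle (a, b, c) via barycentric coords."""
    pts = []
    for _ in range(n):
        u, v = rng.random(), rng.random()
        if u + v > 1.0:           # fold the unit square back into the triangle
            u, v = 1.0 - u, 1.0 - v
        w = 1.0 - u - v
        pts.append(tuple(w * a[d] + u * b[d] + v * c[d] for d in range(3)))
    return pts

def voxel_key(p, voxel_size):
    """Integer voxel coordinates of point p."""
    return tuple(int(p[d] // voxel_size) for d in range(3))

rng = random.Random(0)
labels = {(0, 0, 0): "chair", (1, 0, 0): "chair"}  # per-voxel semantic labels
tri = ((0.0, 0.0, 0.0), (0.2, 0.0, 0.0), (0.0, 0.1, 0.0))
samples = sample_triangle(*tri, 100, rng)
labeled = [(p, labels.get(voxel_key(p, 0.1), "unknown")) for p in samples]
print(len(labeled))
```

Whether this is adequate depends on how far the marching-cubes surface drifts from the voxel centers; labels near voxel boundaries could still be attributed to a neighbouring voxel.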
For completeness, the signatures of the two functions involved are:

```cpp
std::tuple<std::vector<Eigen::Vector3d>, std::vector<Eigen::Vector3d>,
           std::vector<uint8_t>, std::vector<...>>
VDBVolume::ExtractPointCloud(float thresh) const;

void vdbfusion::VDBVolume::saveVDBToPCD(const vdbfusion::VDBVolume& vdbVolume,
                                        const std::string& outputdirectory);
```