Add RGB-D support

mapping example

The original author's code has also added RGB-D support, but its visualization differs from this fork: the original code renders a mesh, while this one still displays a point cloud.
This package currently cannot be tested with the TUM RGB-D dataset, because that dataset contains no IMU data, which VINS-Fusion and VINS-Mono require.
It also cannot support RealSense2, because the RealSense's RGB stream, after alignment with depth, does not share the same field of view, which degrades the dense reconstruction. The original author's code does support RealSense2 RGB-D.
This fork targets datasets that provide RGB-D images together with IMU data.
Use the ETH-3D dataset for testing. It contains stereo RGB-D images as well as IMU measurements. However, ETH-3D does not provide rosbags, so I wrote a ROS package, eth_2_rosbag, that converts their dataset into a rosbag.

Install and Use

Please use the VINS-Supported branch.

cd catkin_ws/src
git clone https://github.com/zhaozhongch/DenseSurfelMapping.git
cd ..
catkin_make
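
After the build finishes, source the workspace so that ROS can find the new package. This is standard catkin usage and is not spelled out in the original instructions:

source devel/setup.bash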

Assuming you have installed the eth_2_rosbag package and VINS-Fusion, run the system as follows. In one terminal, cd to the VINS workspace:

rosrun vins vins_node path_to_eth_mono_imu_config_yaml_file

I provide the ETH config yaml file in eth_2_rosbag. In another terminal, cd to catkin_ws (or whichever workspace contains the dense surfel fusion package):

roslaunch surfel_fusion fuse_eth.launch

In another terminal, cd to the VINS workspace:

roslaunch vins vins_rviz.launch

Finally, in the ROS workspace that contains the eth_2_rosbag package:

roslaunch eth_2_rosbag generate_eth_rosbag.launch

If the sequence you download from ETH-3D is table_3, you will get a plot in rviz similar to the one above.

DenseSurfelMapping

WARNING!

We have cleaned the code so that it can run without GPU acceleration. The code has not been fully tested since the refactoring. If you have any questions or suggestions, please let us know in the issues.

A depth map fusion method

This is a depth map fusion method following the ICRA 2019 submission Real-time Scalable Dense Surfel Mapping by Kaixuan Wang, Fei Gao, and Shaojie Shen.

Given a sequence of depth images, intensity images, and camera poses, the proposed method fuses them into a globally consistent model using a surfel representation. The fusion method supports both ORB-SLAM2 and VINS-Mono (a little modification is required), so you can use it in RGB-D, stereo, or visual-inertial setups. We developed the method with the motivation that the fusion should (1) support loop closure (so that it stays consistent with other state-of-the-art SLAM methods), (2) not require much CPU/memory to reconstruct a fine model in real time, and (3) scale to large environments. These requirements are of vital importance in robot navigation tasks, where the robot must navigate safely using odometry-consistent dense maps.
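
To make the surfel representation concrete, here is a minimal C++ sketch of the per-surfel state the paper describes (position, normal, radius, and a fusion weight), together with the common confidence-weighted update rule used by surfel maps. The struct and function names are illustrative assumptions, not the actual data structures in this repository.

#include <Eigen/Core>

// Illustrative surfel element, loosely following the paper's description.
// Not the actual struct used in this repository.
struct Surfel {
  Eigen::Vector3f position;  // 3D location in the world frame
  Eigen::Vector3f normal;    // unit surface normal
  float radius;              // extent of the local surface patch
  float confidence;          // fusion weight; grows with observations
  int last_update;           // index of the last frame that updated it
};

// Fuse a new measurement (point p, normal n, weight w) into a surfel
// using a confidence-weighted average, the standard surfel-update rule.
inline void fuseMeasurement(Surfel &s, const Eigen::Vector3f &p,
                            const Eigen::Vector3f &n, float w) {
  const float total = s.confidence + w;
  s.position = (s.confidence * s.position + w * p) / total;
  s.normal   = ((s.confidence * s.normal + w * n) / total).normalized();
  s.confidence = total;
}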

An example of the surfel mapping in use is shown below.

mapping example

The left image is an overview of the environment, the middle is the reconstruction produced by our method (visualized as point clouds in rviz of ROS), and the right is the result using OpenChisel. We use VINS-Mono to track the camera motion with loop closure, and MVDepthNet to estimate the depth maps. The black line is the path of the camera. In the reconstruction, loop closure is enabled to correct the detected drift. OpenChisel is a great project for reconstructing the environment using a truncated signed distance function (TSDF). However, as the example shows, it is not well suited to SLAM systems that have loop closure abilities.

The system can also be applied to the KITTI dataset in real time with only CPU computation.

mapping example

The top row is the reconstruction using stereo cameras and the bottom row is the reconstruction using only the left camera. Details can be found in the paper.

A video illustrates the performance of the system and how we apply it to autonomous navigation:

video

Running with VINS-Mono

We have used the surfel fusion with VINS-Mono in many UAV projects. For depth estimation, we recommend high-quality depth methods/devices, for example MVDepthNet or an Intel RealSense. Please refer to /launch/fuse_depthnet.launch for detailed parameters. The system takes a paired image and depth map as input. Since VINS-Mono publishes IMU poses, we also need to receive /vins_estimator/extrinsic to convert the IMU poses into camera poses; a sketch of this conversion follows.
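
The conversion itself is one rigid-body composition: the world-to-camera pose is the world-to-IMU pose composed with the IMU-to-camera extrinsic. Below is a minimal sketch using Eigen; the function and variable names are illustrative assumptions, not the code used in this repository.

#include <Eigen/Geometry>

// Compose the IMU pose published by VINS-Mono with the IMU-to-camera
// extrinsic (received on /vins_estimator/extrinsic) to obtain the
// camera pose needed for depth fusion.
Eigen::Isometry3d imuPoseToCameraPose(const Eigen::Isometry3d &T_world_imu,
                                      const Eigen::Isometry3d &T_imu_cam) {
  return T_world_imu * T_imu_cam;  // T_world_cam
}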

Acknowledgement

We thank Gao Fei, Pan Jie, and Wang Luqi for their contributions to the code and their suggestions.
