Pengyu Yin · Haozhi Cao · Thien-Minh Nguyen · Shenghai Yuan · Shuyang Zhang · Kangcheng Liu · Lihua Xie
TL;DR. One-shot LiDAR global localization leveraging substructures of 3D scene graphs.
😼 Please check out our newly published MCD data set, which provides precisely annotated point-wise semantic labels for free.
Outram achieves precise global localization (or pose initialization) from one single LiDAR frame (unlike accumulative counterparts, e.g., MCL-based methods) by searching over a 3D scene graph.
Running example of Outram on MCD and MulRan against a state-of-the-art LCD (loop closure detection)-based method. Global localization is performed for each single frame.
- Ubuntu OS (tested on 20.04)
sudo apt install cmake libeigen3-dev libboost-all-dev
Please follow the official instructions here. Note that a version higher than 2.1.0 is required.
We rely on catkin_tools to build Outram. Alternatively, catkin_make can be used to build the workspace.
mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
git clone [email protected]:Pamphlett/Outram.git
cd Outram && mkdir build && cd build
cmake ..
mv pmc-src/ ../../../build/
cd ~/catkin_ws
catkin build outram
A few inputs are required for Outram to work:
- LiDAR scans
- Corresponding point-wise semantic labels
- Semantic cluster map (centroids and covariance matrices)
- GT pose file (optional)
- Global point cloud map (optional)
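The scans and labels above are raw binary files. Below is a minimal loader sketch, assuming a SemanticKITTI-style layout (a flat float32 array of x/y/z/intensity per point for the scan, one uint32 label per point for the label file); verify the actual MCD dtypes before relying on this:

```python
import numpy as np

def load_frame(bin_path, label_path):
    """Load one LiDAR scan and its point-wise semantic labels.

    Assumed layout (SemanticKITTI-style, unverified for MCD):
    scan file  = float32 (x, y, z, intensity) tuples,
    label file = one uint32 per point.
    """
    scan = np.fromfile(bin_path, dtype=np.float32).reshape(-1, 4)
    labels = np.fromfile(label_path, dtype=np.uint32)
    assert len(labels) == len(scan), "label/point count mismatch"
    return scan[:, :3], labels  # (N, 3) xyz and (N,) labels
```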
We've prepared one sequence (ntu_night_13) from the MCD data set for testing. Use this link to download it. Unzip the file and put the contents into the test_data subfolder of the project directory.
An example of the anticipated data set structure is shown here:
📦test_data
┣ 📂bin
┃ ┣ 📜frame000000.bin
┃ ┣ 📜frame000001.bin
┃ ┗ 📜...
┣ 📂label
┃ ┣ 📜frame000000.bin
┃ ┣ 📜frame000001.bin
┃ ┗ 📜...
┣ 📂semantic_cluster_map
┃ ┣ 📜cluster_map.pcd
┃ ┗ 📜covariances.bin
┣ 📜bin_filelist.txt
┣ 📜label_filelist.txt
┣ 📜GlobalFullMapSpar.pcd
┗ 📜pose.txt
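A quick sanity check of this layout can save debugging time. The helper below is hypothetical (not part of Outram) and assumes bin_filelist.txt and label_filelist.txt each list one filename per line, matching the bin/ and label/ subfolders:

```python
import os

def check_dataset(root):
    """Verify the test_data layout: filelists exist, scan and label
    counts match, and every listed file is present on disk."""
    with open(os.path.join(root, "bin_filelist.txt")) as f:
        bins = [line.strip() for line in f if line.strip()]
    with open(os.path.join(root, "label_filelist.txt")) as f:
        labels = [line.strip() for line in f if line.strip()]
    if len(bins) != len(labels):
        raise ValueError(f"{len(bins)} scans vs {len(labels)} label files")
    for names, sub in [(bins, "bin"), (labels, "label")]:
        for fn in names:
            path = os.path.join(root, sub, os.path.basename(fn))
            if not os.path.isfile(path):
                raise FileNotFoundError(path)
    return len(bins)  # number of frames found
```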
In the catkin workspace, run the following lines to test on the provided data:
source devel/setup.bash
roslaunch outram global_localization.launch
After running, hit enter to proceed to the next frame.
[Abstract (click to expand)]
One-shot LiDAR localization refers to the ability to estimate the robot pose from a single point cloud, which yields significant advantages in initialization and relocalization processes. In the point cloud domain, the topic has been extensively studied as a global descriptor retrieval (i.e., loop closure detection) and pose refinement (i.e., point cloud registration) problem, both in isolation and combined. However, few works have explicitly considered the relationship between candidate retrieval and correspondence generation in pose estimation, leaving them brittle to substructure ambiguities. To this end, we propose Outram, a hierarchical one-shot localization algorithm that leverages substructures of 3D scene graphs for locally consistent correspondence searching and global substructure-wise outlier pruning. Such a hierarchical process couples feature retrieval and correspondence extraction to resolve substructure ambiguities by conducting a local-to-global consistency refinement. We demonstrate the capability of Outram in a variety of scenarios across multiple large-scale outdoor datasets.

If you have any questions, please contact:
- Pengyu Yin {[email protected]}
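The global outlier pruning mentioned in the abstract is consistency-graph based: pairwise-consistent correspondences form a graph, and a maximum clique of mutually consistent ones is extracted (the pmc dependency in the build steps is, to our understanding, a parallel maximum-clique solver). The sketch below is an illustrative simplification with a greedy clique heuristic, not Outram's actual implementation:

```python
import numpy as np

def prune_by_pairwise_consistency(src, dst, eps=0.1):
    """Keep correspondences that are pairwise distance-consistent.

    src, dst: (N, 3) arrays; correspondence i maps src[i] -> dst[i].
    Rigid motion preserves pairwise distances, so edge (i, j) exists
    iff |dist(src_i, src_j) - dist(dst_i, dst_j)| < eps. A greedy
    clique stands in for an exact max-clique solver such as pmc.
    """
    d_src = np.linalg.norm(src[:, None] - src[None, :], axis=-1)
    d_dst = np.linalg.norm(dst[:, None] - dst[None, :], axis=-1)
    adj = np.abs(d_src - d_dst) < eps
    np.fill_diagonal(adj, False)
    # Greedily grow a clique starting from the highest-degree vertices.
    clique = []
    for v in np.argsort(-adj.sum(axis=1)):
        if all(adj[v, u] for u in clique):
            clique.append(v)
    return sorted(clique)  # indices of surviving correspondences
```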
We would like to express our greatest respect to the authors of the following repositories for making their work public:
If you find Outram useful, please consider citing:
@article{yin2023outram,
  title={Outram: One-shot Global Localization via Triangulated Scene Graph and Global Outlier Pruning},
  author={Yin, Pengyu and Cao, Haozhi and Nguyen, Thien-Minh and Yuan, Shenghai and Zhang, Shuyang and Liu, Kangcheng and Xie, Lihua},
  journal={arXiv preprint arXiv:2309.08914},
  year={2023}
}
Outram stems from one of the MRT (subway) station names in Singapore.