HFNet-SLAM is an extension of the well-known ORB-SLAM3 framework that integrates HF-Net, a unified CNN model. It uses image features from HF-Net to fully replace the hand-crafted ORB features and the bag-of-words (BoW) method in the ORB-SLAM3 system. This results in better tracking and loop closure, boosting the accuracy of the entire HFNet-SLAM system.
Better Tracking:
Better Loop Closure:
Better Runtime Performance:
HFNet-SLAM can run at 50 FPS with GPU support.
More details about the differences can be found in the HFNet-SLAM vs. ORB-SLAM3 document.
We use OpenCV, CUDA, cuDNN, and TensorRT in HFNet-SLAM. The versions of these libraries should be chosen to match your device. The following configuration has been tested:
Name | Version |
---|---|
Ubuntu | 20.04 |
GPU | RTX 2070 Max-Q |
NVIDIA Driver | 510.47 |
OpenCV | 4.2.0 |
CUDA tool | 11.6.2 |
cuDNN | 8.4.1.50 |
TensorRT | 8.5.1 |
TensorFlow(optional) | 1.15, 2.9 |
ROS(optional) | noetic |
We use OpenCV to manipulate images and features.
sudo apt-get install libopencv-dev
Note: While building, carefully check the compiler output log and make sure the OpenCV version is correct. Only 4.2.0 has been tested; a different version may cause compilation problems.
build type: Release -- Found OpenCV: /usr (found suitable version "4.2.0", minimum required is "4.2.0")
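Before building, the installed OpenCV version can also be checked directly; a minimal sketch using pkg-config (this assumes the `opencv4` pkg-config file is installed, which is the case for the Ubuntu `libopencv-dev` package):

```shell
# Query the installed OpenCV version via pkg-config (assumes the 'opencv4'
# .pc file is installed; on some systems the module is named 'opencv' instead).
installed=$(pkg-config --modversion opencv4 2>/dev/null || echo "not found")
echo "Installed OpenCV: $installed"
if [ "$installed" != "4.2.0" ]; then
    echo "Warning: only OpenCV 4.2.0 has been tested with HFNet-SLAM"
fi
```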
We use TensorRT, CUDA, and cuDNN for model inference.
The download and install instructions of CUDA can be found at: https://developer.nvidia.com/cuda-toolkit.
The instructions of cuDNN can be found at: https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html.
The instructions of TensorRT can be found at: https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html.
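After installation, the versions of the GPU stack can be compared against the tested configuration above; a sketch that degrades gracefully when a tool or package is absent (the Debian/Ubuntu `dpkg` query is an assumption and applies only to package-manager installs):

```shell
# Probe the GPU-stack versions HFNet-SLAM depends on.
cuda=$(command -v nvcc >/dev/null 2>&1 && nvcc --version | grep -o 'release [0-9.]*' || echo "not found")
echo "CUDA toolkit: $cuda"
driver=$(command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi --query-gpu=driver_version --format=csv,noheader || echo "not found")
echo "NVIDIA driver: $driver"
# cuDNN and TensorRT ship as libraries, not binaries; query the package
# manager instead (Debian/Ubuntu only).
dpkg -l 2>/dev/null | grep -Ei 'cudnn|tensorrt' || echo "cuDNN/TensorRT packages: not found"
```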
The converted TensorRT model can be downloaded from here. If you wish to convert the model yourself, more details about the process can be found in the HF-Net Model Converting document.
chmod +x build.sh
bash build.sh
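After the build finishes, it can help to confirm that the example binaries were actually produced; a minimal check run from the repository root (the binary paths match the run commands used later in this README):

```shell
# Sanity-check that the build produced the example binaries.
for bin in Examples/Monocular/mono_euroc \
           Examples/Monocular-Inertial/mono_inertial_euroc \
           Examples/RGB-D/rgbd_tum; do
    if [ -x "$bin" ]; then
        echo "ok: $bin"
    else
        echo "missing: $bin (check the build log for errors)"
    fi
done
```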
The official HF-Net is built on TensorFlow. HFNet-SLAM also supports testing with the original HF-Net via the TensorFlow C++ API.
- Install TensorFlow C++: An easy method for building the TensorFlow C++ API can be found at: https://github.com/FloopCZ/tensorflow_cc.
- Edit CMakeLists.txt and rebuild the project:
# In line 19, set USE_TENSORFLOW from OFF to ON to enable TensorFlow functions.
set(USE_TENSORFLOW ON)
# In line 132, indicate the installation path for TensorFlow.
set(Tensorflow_Root "PATH/tensorflow_cc/install")
- Download the converted TensorFlow model files from here.
Some examples using ROS are provided. Building these examples is optional. These have been tested with ROS Noetic under Ubuntu 20.04.
EuRoC.mp4
Evaluate a single sequence with the pure monocular configuration:
pathDataset='PATH/Datasets/EuRoC/'
pathEvaluation='./evaluation/Euroc/'
sequenceName='MH01'
./Examples/Monocular/mono_euroc ./Examples/Monocular/EuRoC.yaml "$pathEvaluation"/"$sequenceName"_MONO/ "$pathDataset"/"$sequenceName" ./Examples/Monocular/EuRoC_TimeStamps/"$sequenceName".txt
python3 ./evaluation/evaluate_ate_scale.py ./evaluation/Ground_truth/EuRoC_left_cam/"$sequenceName"_GT.txt "$pathEvaluation"/"$sequenceName"_MONO/trajectory.txt --verbose --save_path "$pathEvaluation"/"$sequenceName"_MONO/
Evaluate a single sequence with the monocular-inertial configuration:
pathDataset='PATH/Datasets/EuRoC/'
pathEvaluation='./evaluation/Euroc/'
sequenceName='MH01'
./Examples/Monocular-Inertial/mono_inertial_euroc ./Examples/Monocular-Inertial/EuRoC.yaml "$pathEvaluation"/"$sequenceName"_MONO_IN/ "$pathDataset"/"$sequenceName" ./Examples/Monocular-Inertial/EuRoC_TimeStamps/"$sequenceName".txt
python3 ./evaluation/evaluate_ate_scale.py "$pathDataset"/"$sequenceName"/mav0/state_groundtruth_estimate0/data.csv "$pathEvaluation"/"$sequenceName"_MONO_IN/trajectory.txt --verbose --save_path "$pathEvaluation"/"$sequenceName"_MONO_IN/
Evaluate the whole dataset:
bash Examples/eval_euroc.sh
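The `eval_euroc.sh` script presumably loops the per-sequence commands above over the whole dataset; a dry-run sketch of such a loop is below. The sequence list is an assumption based on the standard EuRoC release, and `echo` prints each command instead of executing it (remove `echo` to actually run them):

```shell
pathDataset='PATH/Datasets/EuRoC/'
pathEvaluation='./evaluation/Euroc/'
# Dry run: print the evaluation command for each sequence.
# Adjust the sequence list to match the sequences you downloaded.
for sequenceName in MH01 MH02 MH03 MH04 MH05; do
    echo ./Examples/Monocular/mono_euroc ./Examples/Monocular/EuRoC.yaml \
        "$pathEvaluation"/"$sequenceName"_MONO/ "$pathDataset"/"$sequenceName" \
        ./Examples/Monocular/EuRoC_TimeStamps/"$sequenceName".txt
done
```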
Evaluation results:
TUM-VI.mp4
Evaluate a single sequence with the monocular-inertial configuration:
For 'outdoors' sequences, use the './Examples/Monocular-Inertial/TUM-VI_far.yaml' configuration file instead.
pathDataset='PATH/Datasets/TUM-VI/'
pathEvaluation='./evaluation/TUM-VI/'
sequenceName='dataset-corridor1_512'
./Examples/Monocular-Inertial/mono_inertial_tum_vi ./Examples/Monocular-Inertial/TUM-VI.yaml "$pathEvaluation"/"$sequenceName"/ "$pathDataset"/"$sequenceName"_16/mav0/cam0/data ./Examples/Monocular-Inertial/TUM_TimeStamps/"$sequenceName".txt ./Examples/Monocular-Inertial/TUM_IMU/"$sequenceName".txt
python3 ./evaluation/evaluate_ate_scale.py "$pathDataset"/"$sequenceName"_16/mav0/mocap0/data.csv "$pathEvaluation"/"$sequenceName"/trajectory.txt --verbose --save_path "$pathEvaluation"/"$sequenceName"/
Evaluate the whole dataset:
bash Examples/eval_tum_vi.sh
Evaluation results:
Evaluate a single sequence with the RGB-D configuration:
pathDataset='PATH/Datasets/TUM-RGBD/'
pathEvaluation='./evaluation/TUM-RGBD/'
sequenceName='fr1_desk'
echo "Launching $sequenceName with RGB-D sensor"
./Examples/RGB-D/rgbd_tum ./Examples/RGB-D/TUM1.yaml "$pathEvaluation"/"$sequenceName"/ "$pathDataset"/"$sequenceName"/ ./Examples/RGB-D/associations/"$sequenceName".txt
python3 ./evaluation/evaluate_ate_scale.py "$pathDataset"/"$sequenceName"/groundtruth.txt "$pathEvaluation"/"$sequenceName"/trajectory.txt --verbose --save_path "$pathEvaluation"/"$sequenceName"/
Evaluate the whole dataset:
bash Examples/eval_tum_rgbd.sh
Tested with ROS Noetic under Ubuntu 20.04.
- Add the path including Examples/ROS/HFNet_SLAM to the ROS_PACKAGE_PATH environment variable.
export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:PATH/HFNet_SLAM/Examples/ROS
- Execute the build_ros.sh script:
chmod +x build_ros.sh
./build_ros.sh
- We provide some simple example nodes for public benchmarks:
# Monocular configuration in EuRoC dataset
roslaunch HFNet_SLAM mono_euroc.launch
# Monocular Inertial configuration in EuRoC dataset
roslaunch HFNet_SLAM mono_inertial_euroc.launch
# RGB-D configuration in TUM-RGBD dataset
roslaunch HFNet_SLAM rgbd_tum.launch
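The ROS_PACKAGE_PATH export above lasts only for the current shell. To persist it, the line can be appended to the shell rc file; a sketch that stays idempotent across reruns (RC_FILE is our own variable, defaulting to ~/.bashrc; the PATH/ placeholder should be replaced with your checkout location):

```shell
# Persist the ROS package path across shells.
RC_FILE="${RC_FILE:-$HOME/.bashrc}"
line='export ROS_PACKAGE_PATH=${ROS_PACKAGE_PATH}:PATH/HFNet_SLAM/Examples/ROS'
# Append only if the exact line is not already present, so reruns do not
# accumulate duplicates.
grep -qxF "$line" "$RC_FILE" 2>/dev/null || echo "$line" >> "$RC_FILE"
```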