Cedric Perauer, Laurenz Adrian Heidrich, Haifan Zhang, Matthias Nießner, Anastasiia Kornilova, and Alexey Artemov
We ran the NCuts pipeline with Python 3.9, both on an x86 AMD CPU machine (Ubuntu 20.04, 128 GB RAM) and, skipping the RAM-intensive map creation, on an M1/M2 MacBook Air. For self-training and MaskPLS inference we used RTX 3080 and RTX 4090 GPUs with CUDA 11.8.
For NCuts-based extraction, install the packages below:
```
cd autoinst/
sh setup.sh  # creates conda env named autoinst
```
For running the refined MaskPLS model, please refer to the additional instructions in the self-training README.
Install the Python bindings for Patchwork++:
```
git clone git@github.com:url-kaist/patchwork-plusplus.git
sudo apt-get install g++ build-essential libeigen3-dev python3-pip python3-dev cmake -y
conda activate autoinst
cd patchwork-plusplus
make pyinstall
```
For more details, please see the original Patchwork++ repo.
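To sanity-check the bindings, you can mimic the upstream Python example (module and class names follow the Patchwork++ demo code; treat them as assumptions if the upstream API has changed):

```python
# Minimal sanity check for the Patchwork++ Python bindings.
import numpy as np
import pypatchworkpp  # module name used in the upstream Python examples

params = pypatchworkpp.Parameters()
estimator = pypatchworkpp.patchworkpp(params)

# Run ground segmentation on a dummy N x 4 cloud (x, y, z, intensity).
cloud = np.random.rand(1000, 4).astype(np.float64)
estimator.estimateGround(cloud)
print("ground points:", estimator.getGround().shape)
```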
We use the SemanticKITTI dataset. Including our extracted features, our dataset structure looks like this:
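A sketch of the expected layout (the `sequences/velodyne/labels` part is the standard SemanticKITTI structure; the feature-folder names below are placeholders, see the extraction scripts for the exact paths):

```
DATASET_PATH/
└── sequences/
    ├── 00/
    │   ├── velodyne/         # LiDAR scans (.bin)
    │   ├── labels/           # SemanticKITTI panoptic labels (.label)
    │   ├── image_2/          # camera images used for the image features
    │   ├── dinov2_features/  # extracted DINOv2 features (placeholder name)
    │   └── tarl_features/    # extracted TARL features (placeholder name)
    ├── 01/
    └── ...
```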
We provide the scripts for extracting the relevant image- and point-based features in `autoinst/2D-VFMs/dinov2` and `autoinst/Pointcloud-Models/tarl`.
We also provide the complete data (including extracted aggregated maps) for the first map, so you can test our code right away.
Please download the dataset-related files here and unzip the subdirectories. Then set `DATASET_PATH` in `config.py` to this directory.
Preprocessing the maps requires more memory (our machine used 128 GB), while the chunk-based GraphCuts can be run on a laptop. We therefore also provide the aggregation data for the first map here.
You should then set `OUT_FOLDER` in `config.py` to this directory so the maps can be loaded correctly.
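For example, the two paths above would be set in `autoinst/pipeline/config.py` roughly like this (only the two variable names come from this README; the paths are placeholders):

```python
# autoinst/pipeline/config.py (sketch; paths are placeholders)
DATASET_PATH = "/data/semantic_kitti/"      # unzipped dataset-related files
OUT_FOLDER = "/data/autoinst_aggregation/"  # aggregation data for the first map
```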
Make sure to set the dataset path in `autoinst/pipeline/config.py` accordingly. You can also configure the feature combinations and the use of MaskPLS in `config.py` (by default, TARL/Spatial is used). By default, the pipeline runs on the first map, for which we provide the data (see links above), and computes the metrics.
```
cd autoinst/pipeline/
python run_pipeline.py
```
If you are interested in using our GraphCuts implementation for your own project, we provide a simple implementation that uses only spatial distances here; a rough standalone sketch of the idea is shown below.
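This is not our pipeline code, just a toy illustration of a spatial-only normalized cut using scikit-learn's `SpectralClustering`, which optimizes a normalized-cut relaxation over a precomputed affinity built from pairwise point distances:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy point cloud: two spatially separated blobs of 3D points.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=(0.0, 0.0, 0.0), scale=0.2, size=(50, 3)),
    rng.normal(loc=(3.0, 0.0, 0.0), scale=0.2, size=(50, 3)),
])

# Gaussian affinity from pairwise Euclidean distances (sigma is a free parameter).
dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
sigma = 0.5
affinity = np.exp(-(dists ** 2) / (2 * sigma ** 2))

# Spectral clustering on the precomputed affinity approximately
# solves the normalized-cut objective.
labels = SpectralClustering(
    n_clusters=2, affinity="precomputed", random_state=0
).fit_predict(affinity)
print(labels)
```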
To use MaskPLS inference, simply set the config to `config_maskpls_tarl_spatial`.
You can download one of the sets of weights here.
To generate training chunks, simply set the `GEN_SELF_TRAIN_DATA` flag in `autoinst/pipeline/config.py` to `True`.
Metrics computation is then skipped and the output is stored in the corresponding directory.
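In `config.py` this amounts to the following (a sketch; only the flag name appears in this README):

```python
# autoinst/pipeline/config.py (sketch)
GEN_SELF_TRAIN_DATA = True  # write training chunks instead of computing metrics
```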
For self-training, please refer to the corresponding self-training README, which also contains the instructions for MaskPLS inference.
| Method | AP | P/R/F1 | S_assoc |
|---|---|---|---|
| NCuts Spatial | 41.74% | 86.15%/75.67%/80.57% | 70.19% |
| NCuts TARL/Spatial | 53.74% | 87.69%/77.02%/82.01% | 71.05% |
| NCuts TARL/Spatial/Dino | 34.33% | 81.65%/60.13%/69.26% | 60.00% |
| MaskPLS TARL/Spatial | 65.93% | 91.53%/80.40%/85.61% | 78.42% |
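In the P/R/F1 column, F1 is the harmonic mean of precision and recall, F1 = 2PR/(P + R); e.g., for NCuts Spatial, 2 · 86.15 · 75.67 / (86.15 + 75.67) ≈ 80.57.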
To run the full pipeline/evaluation of our main method, follow these steps:
- Extract TARL features as described in `autoinst/Pointcloud-Models/tarl`
- Run NCuts to generate the training data, setting the config to `config_tarl_spatial` and `GEN_SELF_TRAIN_DATA` to `True`:
  ```
  cd pipeline/
  python run_pipeline.py
  ```
- Run self-training according to the instructions in the corresponding self-training README
- Run the pipeline with MaskPLS to obtain full-map results: make sure the `TEST_MAP` test mode is set to `False`, set the config to `config_maskpls_tarl_spatial`, and set the weights path accordingly (a sketch of these config edits follows this list):
  ```
  cd pipeline/
  python run_pipeline.py
  ```
- The main script stores the per-sequence results in `pipeline/results`. To average them and obtain the final metrics (printed out by the script), run:
  ```
  cd pipeline/
  python metrics/average_sequences.py
  ```
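The config edits for the MaskPLS step above might look like this (the `TEST_MAP` flag and the config name come from this README; how the active config and the weights path are selected is an assumption):

```python
# autoinst/pipeline/config.py (sketch for the full-map MaskPLS run)
TEST_MAP = False  # run the full maps instead of the single provided test map
# Select the MaskPLS TARL/Spatial config and point it at the downloaded weights.
CONFIG = config_maskpls_tarl_spatial            # name from this README
WEIGHTS_PATH = "/path/to/maskpls_weights.ckpt"  # placeholder path
```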
Among others, our project was inspired by and uses code from UnScene3D, MaskPLS, TARL, DINOv2, the semantic-kitti-api, and Patchwork++; we would like to thank the authors for their valuable work.
If you use parts of our code or find our project useful, please consider citing our paper:
```bibtex
@article{perauer2024autoinst,
  title={AutoInst: Automatic Instance-Based Segmentation of LiDAR 3D Scans},
  author={Perauer, Cedric and Heidrich, Laurenz Adrian and Zhang, Haifan and Nie{\ss}ner, Matthias and Kornilova, Anastasiia and Artemov, Alexey},
  journal={arXiv preprint arXiv:2403.16318},
  year={2024}
}
```