This is the pipeline to build models. It covers training, evaluation, and visualization for 3D detection and 3D semantic segmentation.
- Support priority: Tier S
- Supported datasets
  - NuScenes
  - T4dataset with 3D detection
  - T4dataset with 3D semantic segmentation
- Other supported features
  - Unit tests
Prepare the dataset you want to use.
- Run docker

```sh
docker run -it --rm --gpus '"device=0"' --shm-size=64g --name awml -p 6006:6006 -v $PWD/:/workspace -v $PWD/data:/workspace/data autoware-ml
```
- Make info files for nuScenes
  - If you want to make your own pkl files, change the `--extra-tag` from "nuscenes" to a custom name (see the sketch after this command).

```sh
python tools/detection3d/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
```
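- For example, a run with a custom tag would look like the sketch below ("custom_name" is a placeholder; the generated info files should then be named after it, e.g. `custom_name_infos_train.pkl`).

```sh
# Sketch only: replace "custom_name" with your own tag.
python tools/detection3d/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag custom_name
```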
- (Optional) Make info files for T4dataset XX1
  - This process takes time.

```sh
python tools/detection3d/create_data_t4dataset.py --root_path ./data/t4dataset --config autoware_ml/configs/detection3d/dataset/t4dataset/xx1.py --version xx1 --max_sweeps 2 --out_dir ./data/t4dataset/info/user_name
```
- (Optional) Make info files for T4dataset X2
  - This process takes time.

```sh
python tools/detection3d/create_data_t4dataset.py --root_path ./data/t4dataset --config autoware_ml/configs/detection3d/dataset/t4dataset/x2.py --version x2 --max_sweeps 2 --out_dir ./data/t4dataset/info/user_name
```
- You can change the batch size via the config file name.
  - For example, `1xb1` -> `2xb8` means changing from 1 GPU x batch size 1 to 2 GPUs x batch size 8.
  - If you use a custom pkl file, change the info file referenced in the config from `nuscenes_infos_train.pkl` to your custom one (see the sketch below).
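- A minimal sketch of both points is shown below. The config path is borrowed from the CenterPoint example later in this document, and `ann_file` is the usual mmdetection3d config key; your config may differ.

```sh
# The config name encodes {GPUs}xb{batch size per GPU}: "2xb8" = 2 GPUs x batch size 8.
# Check which info pkl the config currently points to, then edit that line if you built
# a custom pkl (e.g. custom_name_infos_train.pkl) instead of nuscenes_infos_train.pkl.
grep -n "ann_file" projects/CenterPoint/configs/t4dataset/second_secfpn_2xb8_121m_base.py
```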
- In general, train with the command below.
  - See each project for its detailed training and evaluation commands.

```sh
python tools/detection3d/train.py {config_file}
```
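- For example, a CenterPoint run would look like the sketch below (the config path is taken from the visualization example later in this document):

```sh
# Example only: train CenterPoint on T4dataset with the 121m base config.
python tools/detection3d/train.py projects/CenterPoint/configs/t4dataset/second_secfpn_2xb8_121m_base.py
```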
- You can run training inside docker as below.

```sh
docker run -it --rm --gpus '"device=0"' --name autoware-ml --shm-size=64g -d -v $PWD/:/workspace -v $PWD/data:/workspace/data autoware-ml bash -c '<command for each project>'
```
- Run TensorBoard and navigate to http://127.0.0.1:6006/

```sh
tensorboard --logdir work_dirs --bind_all
```
- Evaluation

```sh
python tools/detection3d/test.py {config_file} {checkpoint_file}
```
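- For example (the paths follow the CenterPoint work directory used later in this document; epoch_50.pth is illustrative):

```sh
# Example only: evaluate a trained CenterPoint checkpoint.
python tools/detection3d/test.py projects/CenterPoint/configs/t4dataset/second_secfpn_2xb8_121m_base.py \
  work_dirs/centerpoint/t4dataset/second_secfpn_2xb8_121m_base/epoch_50.pth
```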
- After training, you can test the checkpoint of each epoch as below.
  - `min_epoch` is the epoch at which testing starts. If you set 20, it tests epoch_20.pth, epoch_21.pth, epoch_22.pth, and so on.

```sh
python tools/detection3d/test_all.py {config_file} {train_results_directory} {min_epoch}
```
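- For example, the following tests every checkpoint from epoch 20 upward (the directory is illustrative):

```sh
# Example only: evaluate epoch_20.pth, epoch_21.pth, ... in the given work directory.
python tools/detection3d/test_all.py projects/CenterPoint/configs/t4dataset/second_secfpn_2xb8_121m_base.py \
  work_dirs/centerpoint/t4dataset/second_secfpn_2xb8_121m_base/ 20
```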
- Visualization for 3D view
- Visualization for BEV view
  - This script is a simple debug tool.
  - This tool does not use image information, so you can use it to make visualization images for public places such as PRs.
  - Image data often contains personal information, so in many cases it cannot be used in public.

```sh
python tools/detection3d/visualize_bev.py {config_file} --checkpoint {checkpoint_file}
```
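- For example (the paths are illustrative, based on the CenterPoint example below):

```sh
# Example only: draw BEV visualizations with a trained CenterPoint checkpoint.
python tools/detection3d/visualize_bev.py projects/CenterPoint/configs/t4dataset/second_secfpn_2xb8_121m_base.py \
  --checkpoint work_dirs/centerpoint/t4dataset/second_secfpn_2xb8_121m_base/epoch_50.pth
```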
- Visualization for both BEV and image view
  - This is a simple tool to visualize outputs from a 3D perception model.
  - It requires converting the non-annotated T4dataset to an info file beforehand.
  - For example, CenterPoint:

```sh
# Generate predictions for t4dataset
DIR="work_dirs/centerpoint/t4dataset/second_secfpn_2xb8_121m_base/" &&
python tools/detection3d/visualize_bboxes.py projects/CenterPoint/configs/t4dataset/second_secfpn_2xb8_121m_base.py $DIR/epoch_50.pth --data-root <new data root> --ann-file-path <info pickle file> --bboxes-score-threshold 0.35 --frame-range 700 1100
```
where `--ann-file-path` is the path to the info file and `--frame-range` represents the range of frames to visualize.
See each project for more details.