This is the official implementation of CycleSiam, presented in "Self-supervised Object Tracking and Segmentation with Cycle-consistent Siamese Networks". It is built on top of SiamMask. For technical details, please refer to:
Self-supervised Object Tracking and Segmentation with Cycle-consistent Siamese Networks
Weihao Yuan, Michael Yu Wang, Qifeng Chen
IROS 2020
[Paper]
If you find this code useful, please consider citing:
@inproceedings{yuan2020self,
title={Self-supervised object tracking and segmentation with cycle-consistent siamese networks},
author={Yuan, Weihao and Wang, Michael Yu and Chen, Qifeng},
booktitle={Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
pages={},
year={2020},
organization={IEEE}
}
This code has been tested on Ubuntu 16.04 with Python 3.6, PyTorch 0.4.1, CUDA 9.2, and RTX 2080 GPUs.
- Clone the repository
git clone https://github.com/weihaosky/CycleSiam.git && cd CycleSiam
export CycleSiam=$PWD
- Setup python environment
conda create -n cyclesiam python=3.6
source activate cyclesiam
pip install -r requirements.txt
bash make.sh
- Add the project to your PYTHONPATH
export PYTHONPATH=$PWD:$PYTHONPATH
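With the environment active, a quick sanity check (assuming the PyTorch 0.4.1 / CUDA 9.2 combination listed above) is to confirm the interpreter sees the expected setup:

```python
import torch

# The code was tested with PyTorch 0.4.1 and CUDA 9.2; the printed version and
# CUDA availability should match your installation.
print(torch.__version__)          # e.g. 0.4.1
print(torch.cuda.is_available())  # True if the GPU and CUDA driver are visible
```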
- Download the trained CycleSiam model (checkpoint_cyclesiam_plus.pth) into the experiment directory
cd $CycleSiam/experiments/siammask_sharp
- Run demo.py
cd $CycleSiam/experiments/siammask_sharp
export PYTHONPATH=$PWD:$PYTHONPATH
python ../../tools/demo.py --resume checkpoint_cyclesiam_plus.pth --config config_davis.json
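demo.py selects a target in the first frame and then tracks and segments it through the rest of the sequence. If you want to drive the tracker from your own script, the sketch below shows the basic loop, assuming the SiamMask-style helpers siamese_init / siamese_track from tools/test.py that this codebase builds on; check the actual signatures in the repository, and note that `model` and `cfg` are assumed to be loaded exactly as demo.py loads them.

```python
import cv2
import numpy as np
# SiamMask-style tracking helpers; verify the exact names/signatures in tools/test.py.
from tools.test import siamese_init, siamese_track

def run_sequence(model, cfg, image_paths, init_box, device="cuda"):
    """Track a target given as (x, y, w, h) in the first image through a sequence.

    `model` and `cfg` are assumed to be loaded the same way demo.py loads them
    (Custom(anchors=cfg['anchors']) restored from the checkpoint).
    """
    frames = [cv2.imread(p) for p in image_paths]
    x, y, w, h = init_box
    target_pos = np.array([x + w / 2, y + h / 2])
    target_sz = np.array([w, h])

    # Initialize the tracker template on the first frame.
    state = siamese_init(frames[0], target_pos, target_sz, model, cfg["hp"], device=device)

    results = []
    for im in frames[1:]:
        # One tracking step: predicts the new target box and a segmentation mask.
        state = siamese_track(state, im, mask_enable=True, refine_enable=True, device=device)
        results.append((state["target_pos"], state["target_sz"],
                        state["mask"] > state["p"].seg_thr))
    return results

# Example (paths and box values are illustrative only):
# tracks = run_sequence(model, cfg, ["frames/0001.jpg", "frames/0002.jpg"], (305, 112, 163, 253))
```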
- Download the YouTube-VOS, COCO, ImageNet-DET, and ImageNet-VID datasets.
- Preprocess each dataset according to its readme file.
- Download the pretrained backbone model (this model was trained on the ImageNet-1k dataset) and copy it into each experiment directory:
cd $CycleSiam/experiments
wget http://www.robots.ox.ac.uk/~qwang/resnet.model
ls | grep siam | xargs -I {} cp resnet.model {}
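The copied resnet.model is the ImageNet-pretrained backbone that the training scripts start from. Conceptually, restoring it before training looks like the sketch below; the repo's own load helper may differ in details such as key prefixes or logging.

```python
import torch

def load_backbone(model, path="resnet.model"):
    """Load pretrained backbone weights into `model`, tolerating missing heads.

    This mirrors what SiamMask-style load_pretrain helpers do; the actual loader
    in this repository may behave slightly differently.
    """
    checkpoint = torch.load(path, map_location=lambda storage, loc: storage)
    state_dict = checkpoint.get("state_dict", checkpoint)
    # Drop a possible 'module.' prefix left over from DataParallel checkpoints.
    state_dict = {k[len("module."):] if k.startswith("module.") else k: v
                  for k, v in state_dict.items()}
    # Heads that are not in the backbone checkpoint stay randomly initialized.
    model.load_state_dict(state_dict, strict=False)
    return model
```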
- Set up your environment
- From the experiment directory, run the training script (see the conceptual sketch further below)
cd $CycleSiam/experiments/siammask_base/
bash run.sh
- If you experience out-of-memory errors, you can reduce the batch size in run.sh.
- You can view training progress on TensorBoard (logs are at <experiment_dir>/logs/).
- After training, you can test the checkpoints on the VOT dataset:
bash test_all.sh -s 1 -e 20 -d VOT2018 -g "0 1 2 3" # test all snapshots with 4 GPUs
- Select the best model for hyperparameter search.
#bash test_all.sh -m [best_test_model] -d VOT2018 -n [thread_num] -g [gpu_num] # 8 threads with 4 GPUs
bash test_all.sh -m snapshot/checkpoint_e18.pth -d VOT2018 -n 8 -g "0 1 2 3" # 8 threads with 4 GPUs
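What `bash run.sh` trains above is self-supervised: following the paper, the tracker is run forward through a short clip and then backward to the first frame, and the disagreement between the back-tracked target and the starting target (which requires no human annotation) is penalized. The block below is only a conceptual sketch of that cycle-consistency objective, not the actual training code in this repository; `track_step` is a hypothetical stand-in for one differentiable tracking step.

```python
import torch
import torch.nn.functional as F

def cycle_consistency_loss(track_step, frames, init_box):
    """Conceptual cycle-consistency objective (not the repo's training code).

    track_step(frame, box) -> box : one differentiable tracking step (hypothetical)
    frames                        : a short clip; frames[0] contains the target
    init_box                      : the starting box in frames[0], as a tensor
    """
    # Forward pass: track the target from the first frame to the last.
    box = init_box
    for frame in frames[1:]:
        box = track_step(frame, box)

    # Backward pass: track the prediction back to the first frame.
    for frame in reversed(frames[:-1]):
        box = track_step(frame, box)

    # After a full cycle the prediction should land back on the starting box;
    # the residual is the self-supervised training signal.
    return F.smooth_l1_loss(box, init_box)
```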
- Set up your environment
- From the experiment directory, train with the best CycleSiam base model:
cd $CycleSiam/experiments/siammask_sharp
bash run.sh <best_base_model>
bash run.sh checkpoint_e18.pth
- You can view training progress on TensorBoard (logs are at <experiment_dir>/logs/).
- After training, you can test the checkpoints on the VOT dataset:
bash test_all.sh -s 1 -e 20 -d VOT2018 -g "0 1 2 3"
- Select the best model for hyperparameter search.
#bash test_all.sh -m [best_test_model] -d VOT2018 -n [thread_num] -g [gpu_num] # 8 threads with 4 GPUs
bash test_all.sh -m snapshot/checkpoint_e19.pth -d VOT2018 -n 8 -g "0 1 2 3" # 8 threads with 4 GPUs
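The hyperparameter search driven by test_all.sh above is, conceptually, a grid search over test-time tracking parameters: every candidate setting is evaluated on the benchmark and the best-scoring one is kept. The sketch below only illustrates that idea; the parameter names follow the SiamMask-style trackers this repository builds on, and `evaluate_on_vot` is a placeholder for a full benchmark run.

```python
import itertools

def evaluate_on_vot(hp):
    """Placeholder: run the tracker on VOT2018 with settings `hp` and return its EAO.
    In practice this is what each test_all.sh invocation measures."""
    raise NotImplementedError

# Illustrative grid; the real names and ranges come from the repo's tuning scripts.
grid = {
    "penalty_k":        [0.04, 0.08, 0.12],  # penalty on scale/ratio changes
    "window_influence": [0.38, 0.42, 0.46],  # weight of the cosine window
    "lr":               [0.25, 0.30, 0.35],  # box update (smoothing) rate
}

best_eao, best_hp = float("-inf"), None
for values in itertools.product(*grid.values()):
    hp = dict(zip(grid.keys(), values))
    eao = evaluate_on_vot(hp)
    if eao > best_eao:
        best_eao, best_hp = eao, hp

print("best hyperparameters:", best_hp, "EAO:", best_eao)
```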
| Model | VOT2016 (EAO / A / R) | VOT2018 (EAO / A / R) | DAVIS2016 (J / F) | DAVIS2017 (J / F) | Speed (FPS) |
|---|---|---|---|---|---|
| CycleSiam | 0.371 / 0.603 / 0.294 | 0.294 / 0.562 / 0.389 | - / - | - / - | 59 |
| CycleSiam+ | 0.398 / 0.601 / 0.247 | 0.317 / 0.549 / 0.314 | 64.9 / 62.0 | 50.9 / 56.8 | 44 |
Licensed under the MIT License.