Deep SORT - PyTorch

A PyTorch implementation of Deep SORT (Simple Online and Realtime Tracking with a Deep Association Metric).

The file todo.sh contains all build instructions: either run it with ./todo.sh or copy and paste the following into your shell.

# Build the Docker image
docker build -t deepsort .

# Create a volume that bind-mounts the current directory
docker volume create --opt type=none \
                     --opt o=bind \
                     --opt device=. \
                     dpsrt-vol

# Start a GPU container, publish container port 6006 on host port 5001,
# and mount the volume read-write
docker run --gpus all -it \
           -p 5001:6006 \
           --name deepsort_cnt \
           -v dpsrt-vol:/home/PyTorch-YOLOv3/:rw \
           deepsort

cd Object-Tracking/deep_sort_pytorch

python demo_yolo3_deepsort.py /home/frank/modelbunker/models/Object-Tracking/deep_sort_pytorch/images/bab.avi

Detailed README

Introduction

This is an implementation of the MOT tracking algorithm Deep SORT. Deep SORT is basically the same as SORT, but adds a CNN model that extracts appearance features from the image regions of people bounded by a detector. This CNN model is in fact a RE-ID model; the detector used in the PAPER is Faster R-CNN, and the original source code is HERE.
However, in the original code the CNN model is implemented with TensorFlow, which I'm not familiar with, so I re-implemented the CNN feature-extraction model in PyTorch and changed the CNN model a little. I also use YOLOv3 to generate bounding boxes instead of Faster R-CNN.
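
In other words, the per-frame pipeline is: detect people, embed each person crop with the RE-ID CNN, then let Deep SORT associate detections with existing tracks using motion (Kalman filter) and appearance (cosine distance on the embeddings). The sketch below shows that loop in minimal form; the DeepSort and YOLOv3 wrapper classes, their constructors, and the update() signature are assumptions for illustration, not the exact API of this repository.

import cv2

# The class names and call signatures below are assumptions used for
# illustration; they are not a verified API of this repository.
from deep_sort import DeepSort    # assumed wrapper: RE-ID CNN + Kalman filter + matching
from detector import YOLOv3       # assumed wrapper around the YOLOv3 detector

detector = YOLOv3("YOLOv3/cfg/yolo_v3.cfg", "YOLOv3/yolov3.weights", "YOLOv3/cfg/coco.names")
tracker = DeepSort("deep_sort/deep/checkpoint/ckpt.t7")

cap = cv2.VideoCapture("demo.avi")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # 1. Detect people in the current frame (center-x, center-y, w, h boxes).
    bbox_xywh, confidences, class_ids = detector(frame)
    # 2. Embed each crop with the RE-ID CNN and associate with existing tracks.
    outputs = tracker.update(bbox_xywh, confidences, frame)
    # 3. Draw the box and the persistent track id for every confirmed track.
    for x1, y1, x2, y2, track_id in outputs:
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
        cv2.putText(frame, str(track_id), (int(x1), int(y1) - 4),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("deep_sort", frame)
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break
cap.release()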

Dependencies

  • Python 3 (Python 2 untested)
  • numpy
  • scipy
  • opencv-python
  • sklearn
  • pytorch 0.4 or 1.x

Quick Start

  1. Check that all dependencies are installed
pip install -r requirements.txt

For users in China, you can specify a PyPI mirror to speed up installation:

pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
  2. Clone this repository
git clone git@github.com:ZQPei/deep_sort_pytorch.git
  3. Download YOLOv3 parameters
cd YOLOv3/
wget https://pjreddie.com/media/files/yolov3.weights
cd ..
  4. Download deepsort parameters ckpt.t7
cd deep_sort/deep/checkpoint
# download ckpt.t7 from https://drive.google.com/drive/folders/1xhG0kRH1EX5B9_Iz8gQJb7UNnn_riXi6 to this folder
cd ../../../
  5. Run the demo
usage: demo_yolo3_deepsort.py VIDEO_PATH
                              [--help] 
                              [--yolo_cfg YOLO_CFG]
                              [--yolo_weights YOLO_WEIGHTS]
                              [--yolo_names YOLO_NAMES]
                              [--conf_thresh CONF_THRESH]
                              [--nms_thresh NMS_THRESH]
                              [--deepsort_checkpoint DEEPSORT_CHECKPOINT]
                              [--max_dist MAX_DIST] [--ignore_display]
                              [--display_width DISPLAY_WIDTH]
                              [--display_height DISPLAY_HEIGHT]
                              [--save_path SAVE_PATH]          
                              [--use_cuda USE_CUDA]          
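
For example (the input video and save path below are placeholders; all flags come from the usage message above):

python demo_yolo3_deepsort.py demo.avi --save_path output/demo_out.avi --display_width 800 --display_height 600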

All files above can also be downloaded from BaiduDisk:
link: https://pan.baidu.com/s/1TEFdef9tkJVT0Vf0DUZvrg
password: 1eqo

Training the RE-ID model

The original model used in the paper is defined in original_model.py, and its parameters are in original_ckpt.t7.

To train the model, first download the Market-1501 or MARS dataset.

Then you can run train.py to train your own model, and evaluate it with test.py and evaluate.py. (Training curve: train.jpg)
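
For orientation, RE-ID training boils down to classifying person identities and later reusing the learned features as appearance embeddings. The sketch below shows that recipe in generic PyTorch; the dataset layout, ResNet-18 backbone, and hyperparameters are illustrative assumptions and do not reproduce train.py.

# Condensed RE-ID training sketch (illustrative; not this repository's train.py).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Assumes pedestrian crops arranged as one folder per identity (ImageFolder layout).
transform = transforms.Compose([
    transforms.Resize((128, 64)),        # typical pedestrian crop size (h, w)
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/market1501/train", transform=transform)
loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

device = "cuda" if torch.cuda.is_available() else "cpu"
# Small backbone with an identity-classification head; at inference time the
# penultimate features serve as the appearance embedding for Deep SORT.
model = models.resnet18(num_classes=len(train_set.classes)).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

for epoch in range(40):
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
torch.save(model.state_dict(), "reid_sketch.pth")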

Demo videos and images

Demo video: demo.avi

Sample frames: 1.jpg, 2.jpg

Latest Update (07-22)

Changes

  • Bug fixes (thanks to @JieChen91 and @yingsen1 for reporting).
  • Batched feature extraction per frame, which gives a small speed-up (see the sketch after this list).
  • General code improvements.
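
The batching change works roughly as follows: instead of running the RE-ID CNN once per detection, all person crops from a frame are stacked and embedded in a single forward pass. The sketch below illustrates the idea; the function name and preprocessing details are assumptions, not the repository's exact code.

# Batched appearance-feature extraction for one frame (illustrative sketch).
import cv2
import numpy as np
import torch

def extract_features(reid_model, frame, boxes_xywh, size=(64, 128), device="cuda"):
    """Crop every detection, resize, and embed all crops in one forward pass."""
    crops = []
    for cx, cy, w, h in boxes_xywh:
        x1, y1 = int(cx - w / 2), int(cy - h / 2)
        x2, y2 = int(cx + w / 2), int(cy + h / 2)
        crop = frame[max(y1, 0):y2, max(x1, 0):x2]
        crop = cv2.resize(crop, size)  # size is (width, height) for cv2.resize
        crops.append(torch.from_numpy(crop).permute(2, 0, 1).float() / 255.0)
    if not crops:
        return np.empty((0,))
    batch = torch.stack(crops).to(device)  # one batch instead of N single passes
    with torch.no_grad():
        features = reid_model(batch)       # shape (N, feature_dim)
    return features.cpu().numpy()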

Further improvement directions

  • Train the detector on the target dataset rather than the official one.
  • Retrain the RE-ID model on a pedestrian dataset for better performance.
  • Replace the YOLOv3 detector with a more advanced one.

References