PVTv2: Improved Baselines with Pyramid Vision Transformer, arxiv

PaddlePaddle training/validation code and pretrained models for PVTv2 Detection.

The official PyTorch implementation is here.

This implementation is developed by PaddleViT.

(Figure: PVTv2 Model Overview)

Update

Update (2021-09-15): Code is released and ported Mask R-CNN weights are uploaded.

Model Zoo

Model        Backbone          box_mAP   Weights
Mask R-CNN   pvtv2_b0          38.3      google/baidu(3kqb)
Mask R-CNN   pvtv2_b1          41.8      google/baidu(k5aq)
Mask R-CNN   pvtv2_b2          45.2      google/baidu(jh8b)
Mask R-CNN   pvtv2_b2_linear   44.1      google/baidu(8ipt)
Mask R-CNN   pvtv2_b3          46.9      google/baidu(je4y)
Mask R-CNN   pvtv2_b4          47.5      google/baidu(n3ay)
Mask R-CNN   pvtv2_b5          47.4      google/baidu(jzq1)

*The results are evaluated on the COCO validation set.

  • Backbone model weights can be found in PVTv2 classification here.

Notebooks

We provide a few notebooks in aistudio to help you get started:

*(coming soon)*

Requirements

Data

The COCO2017 dataset is used, with the following folder structure:

COCO dataset folder
├── annotations
│   ├── captions_train2017.json
│   ├── captions_val2017.json
│   ├── instances_train2017.json
│   ├── instances_val2017.json
│   ├── person_keypoints_train2017.json
│   └── person_keypoints_val2017.json
├── train2017
│   ├── 000000000009.jpg
│   ├── 000000000025.jpg
│   ├── 000000000030.jpg
│   ├── 000000000034.jpg
│   ...
└── val2017
    ├── 000000000139.jpg
    ├── 000000000285.jpg
    ├── 000000000632.jpg
    ├── 000000000724.jpg
    ...

More details about the COCO dataset can be found here and on the COCO official website.
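
Before launching training or evaluation, it can help to confirm the layout above is in place. A minimal sketch (the coco_root path is a placeholder for your own setup, not a path used by the repo):

import os

coco_root = '/path/to/dataset/coco'  # placeholder; point this at your COCO folder
expected = [
    'annotations/instances_train2017.json',
    'annotations/instances_val2017.json',
    'train2017',
    'val2017',
]
for rel in expected:
    path = os.path.join(coco_root, rel)
    status = 'OK     ' if os.path.exists(path) else 'MISSING'
    print(f'{status} {path}')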

Usage

To use the model with pretrained weights, download the .pdparams weight file and change the related file paths in the following Python scripts. The model config files are located in ./configs/.

For example, assume the downloaded weight file is stored in ./pvtv2_b0_maskrcnn.pdparams; to use the pvtv2 model in Python:

import paddle
from config import get_config
from pvtv2_det import build_pvtv2_det
# config files in ./configs/
config = get_config('./configs/pvtv2_b0.yaml')
# build model
model = build_pvtv2_det(config)
# load pretrained weights
model_state_dict = paddle.load('./pvtv2_b0_maskrcnn.pdparams')
model.set_dict(model_state_dict)
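
As a quick sanity check that the weights loaded, a minimal sketch (not part of the repo's scripts) is to switch to inference mode and count the parameters:

import numpy as np
# put the model in inference mode and report its size
model.eval()
num_params = sum(int(np.prod(p.shape)) for p in model.parameters())
print(f'loaded model with {num_params} parameters')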

Evaluation

To evaluate PVTv2 model performance on COCO2017 with a single GPU, run the following script from the command line:

sh run_eval.sh

or

CUDA_VISIBLE_DEVICES=0 \
python main_single_gpu.py \
    -cfg=./configs/pvtv2_b0.yaml \
    -dataset=coco \
    -batch_size=4 \
    -data_path=/path/to/dataset/coco/val \
    -eval \
    -pretrained=/path/to/pretrained/model/pvtv2_b0_maskrcnn  # .pdparams is NOT needed

Run evaluation using multiple GPUs:

sh run_eval_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3 \
python main_multi_gpu.py \
    -cfg=./configs/pvtv2_b0.yaml \
    -dataset=coco \
    -batch_size=4 \
    -data_path=/path/to/dataset/coco/val \
    -eval \
    -pretrained=/path/to/pretrained/model/pvtv2_b0_maskrcnn  # .pdparams is NOT needed
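
The reported box_mAP follows the standard COCO protocol, which evaluation scripts for this task typically compute with pycocotools. A standalone sketch of that metric, assuming a detections file in the standard COCO results format (the bbox_predictions.json file name is illustrative, not an output of the repo):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# ground-truth annotations and a detections file in COCO results format
coco_gt = COCO('/path/to/dataset/coco/annotations/instances_val2017.json')
coco_dt = coco_gt.loadRes('bbox_predictions.json')  # illustrative file name

coco_eval = COCOeval(coco_gt, coco_dt, iouType='bbox')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # first line printed is AP@[0.50:0.95], i.e. box_mAP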

Training

To train the PVTv2 model on COCO2017 with a single GPU, run the following script from the command line:

sh run_train.sh

or

CUDA_VISIBLE_DEVICES=1 \
python main_single_gpu.py \
    -cfg=./configs/pvtv2_b0.yaml \
    -dataset=coco \
    -batch_size=2 \
    -data_path=/path/to/dataset/coco/train \
    -pretrained=/path/to/pretrained/model/pvtv2_b0  # .pdparams is NOT needed

The pretrained argument sets the pretrained backbone weights, which can be found in PVTv2 classification here.


Run training using multiple GPUs (coming soon):

sh run_train_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3 \
python main_multi_gpu.py \
    -cfg=./configs/pvtv2_b0.yaml \
    -dataset=coco \
    -batch_size=2 \
    -data_path=/path/to/dataset/coco/train \
    -pretrained=/path/to/pretrained/model/pvtv2_b0  # .pdparams is NOT needed

The pretrained argument sets the pretrained backbone weights, which can be found in PVTv2 classification here.
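
Conceptually, the -pretrained flag initializes only the backbone from the classification checkpoint before detection training starts. A hedged sketch of that step (the model.backbone attribute name is an assumption for illustration, not the repo's exact code):

import paddle
from config import get_config
from pvtv2_det import build_pvtv2_det

config = get_config('./configs/pvtv2_b0.yaml')
model = build_pvtv2_det(config)
# load classification weights into the detection backbone only;
# mismatched keys (e.g. the classification head) are reported and skipped
backbone_state = paddle.load('./pvtv2_b0.pdparams')
model.backbone.set_dict(backbone_state)  # 'backbone' attribute name is an assumption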

Visualization

coming soon
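
In the meantime, a minimal sketch for drawing predicted boxes with Pillow (the function name and box format are illustrative; boxes are assumed to be [x1, y1, x2, y2] in pixels):

from PIL import Image, ImageDraw

def draw_boxes(image_path, boxes, scores, score_thresh=0.5):
    # draw each box whose score clears the threshold, with its score as a label
    img = Image.open(image_path).convert('RGB')
    draw = ImageDraw.Draw(img)
    for box, score in zip(boxes, scores):
        if score < score_thresh:
            continue
        draw.rectangle(list(box), outline=(255, 0, 0), width=2)
        draw.text((box[0], max(box[1] - 12, 0)), f'{score:.2f}', fill=(255, 0, 0))
    return img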

Reference

@article{wang2021pvtv2,
  title={PVTv2: Improved Baselines with Pyramid Vision Transformer},
  author={Wang, Wenhai and Xie, Enze and Li, Xiang and Fan, Deng-Ping and Song, Kaitao and Liang, Ding and Lu, Tong and Luo, Ping and Shao, Ling},
  journal={arXiv preprint arXiv:2106.13797},
  year={2021}
}