Dual Encoding for Video Retrieval by Text

Source code of our TPAMI'21 paper Dual Encoding for Video Retrieval by Text and CVPR'19 paper Dual Encoding for Zero-Example Video Retrieval.


Requirements

Environments

  • Ubuntu 16.04
  • CUDA 10.1
  • Python 3.8
  • PyTorch 1.5.1

We used Anaconda to set up a deep learning workspace that supports PyTorch. Run the following script to install the required packages.

conda create --name ws_dual_py3 python=3.8
conda activate ws_dual_py3
git clone https://github.com/danieljf24/hybrid_space.git
cd hybrid_space
pip install -r requirements.txt
conda deactivate
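
To quickly confirm that the environment roughly matches the versions listed above (optional; the exact CUDA build on your machine may differ), you can run:

conda activate ws_dual_py3
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"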

Dual Encoding on MSRVTT10K

Required Data

Run the following script to download and extract the MSR-VTT dataset (msrvtt10k-resnext101_resnet152.tar.gz, 4.3G) and a pre-trained word2vec model (vec500flickr30m.tar.gz, 3.0G). The data can also be downloaded from Baidu pan (url, password: p3p0) or Google drive (url). The extracted data will be placed in $HOME/VisualSearch/.

ROOTPATH=$HOME/VisualSearch
mkdir -p $ROOTPATH && cd $ROOTPATH

# download and extract dataset
wget http://8.210.46.84:8787/msrvtt10k-resnext101_resnet152.tar.gz
tar zxf msrvtt10k-resnext101_resnet152.tar.gz -C $ROOTPATH

# download and extract pre-trained word2vec
wget http://lixirong.net/data/w2vv-tmm2018/word2vec.tar.gz
tar zxf word2vec.tar.gz -C $ROOTPATH
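
After extraction, $ROOTPATH should contain both the msrvtt10k data and the word2vec model; a quick look to confirm (the exact directory names depend on the archive layout):

ls $ROOTPATH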

Model Training and Evaluation

Run the following script to train and evaluate the Dual Encoding network with hybrid space on the official partition of MSR-VTT. The video features are the concatenation of ResNeXt-101 and ResNet-152 features.

conda activate ws_dual_py3
./do_all.sh msrvtt10k hybrid resnext101-resnet152

Running the script will do the following things:

  1. Train the Dual Encoding network with hybrid space and select the checkpoint that performs best on the validation set as the final model. Notice that only the best-performing checkpoint on the validation set is saved, to save disk space (see the sketch after this list for locating it).
  2. Evaluate the final model on the test set. Note that the dataset already includes vocabulary and concept annotations. If you would like to generate the vocabulary and concepts yourself, run the script ./do_vocab_concept.sh msrvtt10k 1.
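
After training finishes, the saved checkpoint can be located with a simple search. This is a sketch; we assume checkpoints are written somewhere under $HOME/VisualSearch and that the filename ends with model_best.pth.tar, as the pre-trained checkpoints below do:

find $HOME/VisualSearch -name "*model_best.pth.tar"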

If you would like to train the Dual Encoding network with latent space (the conference version), please run the following script:

./do_all.sh msrvtt10k latent resnext101-resnet152

To train the model on the Test1k-Miech and Test1k-Yu partitions of MSR-VTT, please run the following scripts:

./do_all.sh msrvtt10kmiech hybrid resnext101-resnet152
./do_all.sh msrvtt10kyu hybrid resnext101-resnet152

Expected Performance

Run the following script to download and evaluate our trained models on MSR-VTT. The trained models can also be downloaded from Baidu pan (url, password: p3p0). Note that if you would like to evaluate with our trained models, please make sure to use the vocabulary and concept annotations provided in msrvtt10k-resnext101_resnet152.tar.gz.

MODELDIR=$HOME/VisualSearch/checkpoints
mkdir -p $MODELDIR

# download trained checkpoints
wget -P $MODELDIR http://8.210.46.84:8787/checkpoints/msrvtt10k_model_best.pth.tar

# evaluate on official split of MSR-VTT
CUDA_VISIBLE_DEVICES=0 python tester.py --testCollection msrvtt10k --logger_name $MODELDIR  --checkpoint_name msrvtt10k_model_best.pth.tar

To evaluate on the other splits, please download the corresponding checkpoints and set the checkpoint_name parameter to msrvtt10kmiech_model_best.pth.tar (Test1k-Miech) or msrvtt10kyu_model_best.pth.tar (Test1k-Yu). An overview of the pre-trained checkpoints on MSR-VTT is given in the table below; example commands for these two splits are sketched after it.

Split          Pre-trained Model
Official       msrvtt10k_model_best.pth.tar (264M)
Test1k-Miech   msrvtt10kmiech_model_best.pth.tar (267M)
Test1k-Yu      msrvtt10kyu_model_best.pth.tar (267M)
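
For example, evaluating on the two extra splits could look like the following. This is a sketch: we assume the checkpoints are served from the same URL prefix as above and that the test collections are named msrvtt10kmiech and msrvtt10kyu, matching the training commands.

# Test1k-Miech
wget -P $MODELDIR http://8.210.46.84:8787/checkpoints/msrvtt10kmiech_model_best.pth.tar
CUDA_VISIBLE_DEVICES=0 python tester.py --testCollection msrvtt10kmiech --logger_name $MODELDIR --checkpoint_name msrvtt10kmiech_model_best.pth.tar

# Test1k-Yu
wget -P $MODELDIR http://8.210.46.84:8787/checkpoints/msrvtt10kyu_model_best.pth.tar
CUDA_VISIBLE_DEVICES=0 python tester.py --testCollection msrvtt10kyu --logger_name $MODELDIR --checkpoint_name msrvtt10kyu_model_best.pth.tar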

The expected performance of Dual Encoding on MSR-VTT is as follows. Note that due to random factors in SGD-based training, the numbers may differ slightly from those reported in the paper.

Split          Text-to-Video Retrieval              Video-to-Text Retrieval              SumR
               R@1   R@5   R@10  MedR  mAP          R@1   R@5   R@10  MedR  mAP
Official       11.8  30.6  41.8  17    21.4         21.6  45.9  58.5  7     10.3         210.2
Test1k-Miech   22.7  50.2  63.1  5     35.6         24.7  52.3  64.2  5     37.2         277.2
Test1k-Yu      21.5  48.8  60.2  6     34.0         21.7  49.0  61.4  6     34.6         262.6

Dual Encoding on VATEX

Required Data

Download the VATEX dataset (vatex-i3d.tar.gz, 3.0G) and the pre-trained word2vec model (vec500flickr30m.tar.gz, 3.0G). The data can also be downloaded from Baidu pan (url, password: p3p0) or Google drive (url). Please extract the data into $HOME/VisualSearch/.

Model Training and Evaluation

Run the following script to train and evaluate Dual Encoding network with hybrid space on VATEX.

ROOTPATH=$HOME/VisualSearch

# download and extract dataset
wget http://8.210.46.84:8787/vatex-i3d.tar.gz
tar zxf vatex-i3d.tar.gz -C $ROOTPATH

./do_all.sh vatex hybrid

Expected Performance

Run the following script to download and evaluate our trained model (vatex_model_best.pth.tar, 230M) on VATEX.

MODELDIR=$HOME/VisualSearch/checkpoints

# download trained checkpoints
wget -P $MODELDIR http://8.210.46.84:8787/checkpoints/vatex_model_best.pth.tar

CUDA_VISIBLE_DEVICES=0 python tester.py --testCollection vatex --logger_name $MODELDIR  --checkpoint_name vatex_model_best.pth.tar

The expected performance of Dual Encoding with hybrid space learning on VATEX is as follows.

Split          Text-to-Video Retrieval              Video-to-Text Retrieval              SumR
               R@1   R@5   R@10  MedR  mAP          R@1   R@5   R@10  MedR  mAP
VATEX          35.8  72.8  82.9  2     52.0         47.5  76.0  85.3  2     39.1         400.3

Dual Encoding on Ad-hoc Video Search (AVS) (still working)

Data

The following three datasets are used for training, validation and testing, respectively: tgif-msrvtt10k, tv2016train and iacc.3. For more information about these datasets, please refer to https://github.com/li-xirong/avs.

Run the following scripts to download and extract these datasets. The extracted data is placed in $HOME/VisualSearch/.

Sentence data

Frame-level feature data

ROOTPATH=$HOME/VisualSearch
cd $ROOTPATH

# download and extract dataset
wget http://39.104.114.128/avs/tgif_ResNext-101.tar.gz
tar zxf tgif_ResNext-101.tar.gz

wget http://39.104.114.128/avs/msrvtt10k_ResNext-101.tar.gz
tar zxf msrvtt10k_ResNext-101.tar.gz

wget http://39.104.114.128/avs/tv2016train_ResNext-101.tar.gz
tar zxf tv2016train_ResNext-101.tar.gz

wget http://39.104.114.128/avs/iacc.3_ResNext-101.tar.gz
tar zxf iacc.3_ResNext-101.tar.gz

# combine feature of tgif and msrvtt10k
./do_combine_features.sh
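
After combining, the training collection tgif-msrvtt10k should contain frame features in the layout described later in this README. A quick check (we assume the feature directory is named after the visual_feature variable used in the next step):

ls $ROOTPATH/tgif-msrvtt10k/FeatureData/pyresnext-101_rbps13k,flatten0_output,os
# expect feature.bin, id.txt and shape.txt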

Train Dual Encoding model from scratch

conda activate ws_dual_py3

trainCollection=tgif-msrvtt10k
visual_feature=pyresnext-101_rbps13k,flatten0_output,os

# Generate a vocabulary on the training set
./do_get_vocab.sh $trainCollection

# Generate video frame info
#./do_get_frameInfo.sh $trainCollection $visual_feature


# training and testing
./do_all_avs.sh 

conda deactivate

How to run Dual Encoding on other datasets? (still working)

Store the training, validation and test subsets in three folders with the following structure, respectively.

${subset_name}
├── FeatureData
│   └── ${feature_name}
│       ├── feature.bin
│       ├── shape.txt
│       └── id.txt
└── TextData
    ├── ${subset_name}train.caption.txt
    ├── ${subset_name}val.caption.txt
    └── ${subset_name}test.caption.txt
  • FeatureData: video frame features. Use txt2bin.py to convert video frame features into the required binary format.
  • ${subset_name}train/val/test.caption.txt: caption data. The file structure is as follows, in which the video and the sentence on the same line are relevant (a small consistency check is sketched after the example).
video_id_1#1 sentence_1
video_id_1#2 sentence_2
...
video_id_n#1 sentence_k
...
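
Before running the format check below, it can be useful to verify that every video id appearing in a caption file also has a feature. A minimal sketch in shell, assuming id.txt lists the video ids separated by whitespace and using the ${subset_name} and ${feature_name} placeholders from above:

# ids that occur in the training captions
cut -d' ' -f1 ${subset_name}/TextData/${subset_name}train.caption.txt | cut -d'#' -f1 | sort -u > caption_ids.txt
# ids that have frame features
tr -s ' ' '\n' < ${subset_name}/FeatureData/${feature_name}/id.txt | sort -u > feature_ids.txt
# ids with captions but without features (this list should ideally be empty)
comm -23 caption_ids.txt feature_ids.txt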

You can run the following script to check whether the data is ready:

./do_format_check.sh ${train_set} ${val_set} ${test_set} ${rootpath} ${feature_name}

where ${train_set}, ${val_set} and ${test_set} are the names of the training, validation and test sets, respectively, ${rootpath} is the directory where the datasets are stored, and ${feature_name} is the name of the video frame feature.

If you pass the format check, use the following script to train and evaluate Dual Encoding on your own dataset:

conda activate ws_dual_py3
./do_all_own_data.sh ${train_set} ${val_set} ${test_set} ${rootpath} ${feature_name} ${caption_num} full
conda deactivate

If the training data for your task is relatively limited, we suggest Dual Encoding with levels 2 and 3 only. Compared to the full edition, this reduced version gives nearly comparable performance on MSR-VTT, but with fewer trainable parameters.

conda activate ws_dual_py3
./do_all_own_data.sh ${train_set} ${val_set} ${test_set} ${rootpath} ${feature_name} ${caption_num} reduced
conda deactivate

References

If you find the package useful, please consider citing our TPAMI'21 or CVPR'19 paper:

@article{dong2021dual,
  title={Dual Encoding for Video Retrieval by Text},
  author={Dong, Jianfeng and Li, Xirong and Xu, Chaoxi and Yang, Xun and Yang, Gang and Wang, Xun and Wang, Meng},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  doi = {10.1109/TPAMI.2021.3059295},
  year={2021}
}
@inproceedings{cvpr2019-dual-dong,
  title={Dual Encoding for Zero-Example Video Retrieval},
  author={Dong, Jianfeng and Li, Xirong and Xu, Chaoxi and Ji, Shouling and He, Yuan and Yang, Gang and Wang, Xun},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}
