Commit

Documentation for the training

Bruno Korbar committed Jul 1, 2020
1 parent 03cd95f commit 74dd547
Showing 5 changed files with 213 additions and 19 deletions.
142 changes: 137 additions & 5 deletions .gitignore
@@ -1,6 +1,138 @@
/list/*.csv
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
*.pyc
*.pkl
lib/models/*.pyc
lib/utils/*.pyc
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/
3 changes: 2 additions & 1 deletion README.md
@@ -17,5 +17,6 @@ Currently, this codebase supports the following models:


## Supporting Team
- This codebase is actively supported by Facebook AI computer vision: @CHJoanna, @weiyaowang, @hengcv, @deeptigp, @dutran, and community researchers @bjuncek (Quantsight, Oxford VGG).
+ This codebase is actively supported by Facebook AI computer vision: @CHJoanna, @weiyaowang, @hengcv, @deeptigp, @dutran, and community researchers @bjuncek (Quansight, Oxford VGG).


26 changes: 13 additions & 13 deletions c2/README.md
@@ -15,23 +15,23 @@ We provide our latest video models including R(2+1)D, ir-CSN, ip-CSN (all with 1

### R(2+1)D-152

| Input size | Pretrained dataset | Pretrained model | Video@1 Kinetics | Video@5 Kinetics | Finetuned model | GFLOPs | params(M) |
| ---------- | ------------------ | -------------------------------------------------------------------------------------------------------- | ---------------- | ---------------- | ------------------------------------------------------------------------------------------------------------ | ------ | --------- |
| 32x112x112 | Sports1M | [link](https://www.dropbox.com/s/w5cdqeyqukuaqt7/r2plus1d_152_sports1m_from_scratch_f127111290.pkl?dl=0) | 79.5 | 94.0 | [link](https://www.dropbox.com/s/twvcpe30rxuaf45/r2plus1d_152_ft_kinetics_from_sports1m_f128957437.pkl?dl=0) | 329.1 | 118.0 |
| 32x112x112 | IG-65M | [link](https://www.dropbox.com/s/oqdg176p7nqc84v/r2plus1d_152_ig65m_from_scratch_f106380637.pkl?dl=0) | 81.6 | 95.3 | [link](https://www.dropbox.com/s/tmxuae8ubo5gipy/r2plus1d_152_ft_kinetics_from_ig65m_f107107466.pkl?dl=0) | 329.1 | 118.0 |


### ir-CSN-152
| Input size | Pretrained dataset | Pretrained model | Video@1 Kinetics | Video@5 Kinetics | Finetuned model | GFLOPS | params(M) |
| ---------- | ------------------ | ---------------------------------------------------------------------------------------------------- | ---------------- | ---------------- | --------------------------------------------------------------------------------------------------------- | ------ | --------- |
| 32x224x224 | Sports1M | [link](https://www.dropbox.com/s/woh99y2hll1mlqv/irCSN_152_Sports1M_from_scratch_f99918785.pkl?dl=0) | 78.2 | 93.0 | [link](https://www.dropbox.com/s/zuoj1aqouh6bo6k/irCSN_152_ft_kinetics_from_Sports1M_f101599884.pkl?dl=0) | 96.7 | 29.6 |
| 32x224x224 | IG-65M | [link](https://www.dropbox.com/s/r0kppq7ox6c57no/irCSN_152_ig65m_from_scratch_f125286141.pkl?dl=0) | 82.6 | 95.3 | [link](https://www.dropbox.com/s/gmd8r87l3wmkn3h/irCSN_152_ft_kinetics_from_ig65m_f126851907.pkl?dl=0) | 96.7 | 29.6 |

### ip-CSN-152
| Input size | Pretrained dataset | Pretrained model | Video@1 Kinetics | Video@5 Kinetics | Finetuned model | GFLOPS | params(M) |
| ---------- | ------------------ | ----------------------------------------------------------------------------------------------------- | ---------------- | ---------------- | --------------------------------------------------------------------------------------------------------- | ------ | --------- |
| 32x224x224 | Sports1M | [link](https://www.dropbox.com/s/70di7o7qz6gjq6x/ipCSN_152_Sports1M_from_scratch_f111018543.pkl?dl=0) | 78.8 | 93.5 | [link](https://www.dropbox.com/s/ir7cr0hda36knux/ipCSN_152_ft_kinetics_from_Sports1M_f111279053.pkl?dl=0) | 108.8 | 32.8 |
| 32x224x224 | IG-65M | [link](https://www.dropbox.com/s/1ryvx8k7kzs8od6/ipCSN_152_ig65m_from_scratch_f130601052.pkl?dl=0) | 82.5 | 95.3 | [link](https://www.dropbox.com/s/zpp3p0vn2i7bibl/ipCSN_152_ft_kinetics_from_ig65m_f133090949.pkl?dl=0) | 108.8 | 32.8 |


## References
@@ -44,4 +44,4 @@ We provide our latest video models including R(2+1)D, ir-CSN, ip-CSN (all with 1
VMZ is Apache 2.0 licensed, as found in the LICENSE file.

## Supporting Team
- This codebase is actively supported by some members of CV team (Facebook AI): @CHJoanna, @weiyaowang, @bjuncek, @hengcv, @deeptigp, and @dutran.
+ This codebase is actively supported by some members of CV team (Facebook AI): @CHJoanna, @weiyaowang, @hengcv, @deeptigp, and @dutran.
19 changes: 19 additions & 0 deletions pt/INSTALL.md
@@ -0,0 +1,19 @@
## Installation instructions (conda)

```shell
# install pytorch
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch-nightly
# this setup needs an older torchvision, so replace the one installed above
pip uninstall torchvision
conda install av -c conda-forge
pip install torchvision==0.5
pip install submitit
# tensorflow is also required
pip install tensorflow
pip install -e .
```
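After running the steps above, a quick sanity check can confirm the environment is complete. The snippet below is an illustration, not part of the repository; it only reports which of the required packages are importable:

```python
# Sanity check for the conda environment set up above: report which of
# the required packages can be found. Run inside the activated environment.
import importlib.util

required = ["torch", "torchvision", "av", "submitit", "tensorflow"]
status = {pkg: importlib.util.find_spec(pkg) is not None for pkg in required}
for pkg, found in status.items():
    print(f"{pkg}: {'OK' if found else 'MISSING'}")
```

Any `MISSING` entry points at a step above that did not complete.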
42 changes: 42 additions & 0 deletions pt/README.md
@@ -0,0 +1,42 @@
# VMZ: Model Zoo for Video Modeling

VMZ is a PyTorch codebase for video modeling developed by the Computer Vision team at Facebook AI. The aim of this codebase is to help other researchers and industry practitioners:
+ reproduce some of our research results and
+ leverage our very strong pre-trained models.

Currently, this codebase supports the following models:
+ R(2+1)D, MCx models [[1]](https://research.fb.com/wp-content/uploads/2018/04/a-closer-look-at-spatiotemporal-convolutions-for-action-recognition.pdf).
+ CSN models [[2]](https://arxiv.org/pdf/1904.02811.pdf).
+ R(2+1)D and CSN models pre-trained on large-scale (65 million!) weakly-supervised public Instagram videos (**IG-65M**) [[3]](https://research.fb.com/wp-content/uploads/2019/05/Large-scale-weakly-supervised-pre-training-for-video-action-recognition.pdf).

## Main Models

We provide our latest video models, including R(2+1)D, ir-CSN, and ip-CSN (all with 152 layers), which are pre-trained on Sports-1M or **IG-65M** and then fine-tuned on Kinetics-400. Both pre-trained and fine-tuned models are provided in the table below. We hope these models will serve as valuable baselines and feature extractors for related video modeling tasks such as action detection, video captioning, and video Q&A.

For your convenience, all models are available via torch hub, with pre-trained weights provided for each respective model definition. Most models support the following pre-trainings, which correspond to their Caffe2 equivalents:
```python
avail_pretrainings = [
"ig65m_32frms",
"ig_ft_kinetics_32frms",
"sports1m_32frms",
"sports1m_ft_kinetics_32frms",
]
```

This allows the models to be loaded with their respective pre-trainings via torch hub. If you want to use a model directly, you can simply import it from the `vmz` package.

```python
from vmz.models import ir_csn_152
model = ir_csn_152(pretraining="ig65m_32frms")
```
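Since the pre-training is selected by string, it can help to fail fast on an unknown name before any weights are fetched. The helper below is a hypothetical sketch built on the `avail_pretrainings` list above, not part of the `vmz` API:

```python
# Hypothetical guard (not part of vmz): validate a requested pre-training
# name against the available ones before constructing a model.
avail_pretrainings = [
    "ig65m_32frms",
    "ig_ft_kinetics_32frms",
    "sports1m_32frms",
    "sports1m_ft_kinetics_32frms",
]

def check_pretraining(name: str) -> str:
    if name not in avail_pretrainings:
        raise ValueError(
            f"unknown pretraining {name!r}; expected one of {avail_pretrainings}"
        )
    return name

print(check_pretraining("ig65m_32frms"))  # prints "ig65m_32frms"
```

A wrapper like this gives a clear error message up front instead of a failure deep inside model construction.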


## References
1. D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun and M. Paluri. **A Closer Look at Spatiotemporal Convolutions for Action Recognition.** CVPR 2018.
2. D. Tran, H. Wang, L. Torresani and M. Feiszli. **Video Classification with Channel-Separated Convolutional Networks.** ICCV 2019.
3. D. Ghadiyaram, M. Feiszli, D. Tran, X. Yan, H. Wang and D. Mahajan, **Large-scale weakly-supervised pre-training for video action recognition.** CVPR 2019.


## License
VMZ is Apache 2.0 licensed, as found in the LICENSE file.
