From 74dd547f7d42e08952b20fdddd1df57efdc94c5d Mon Sep 17 00:00:00 2001
From: Bruno Korbar
Date: Wed, 1 Jul 2020 15:02:50 -0700
Subject: [PATCH] Documentation for training

---
 .gitignore    | 142 ++++++++++++++++++++++++++++++++++++++++++++++++--
 README.md     |   3 +-
 c2/README.md  |  26 ++++-----
 pt/INSTALL.md |  19 +++++++
 pt/README.md  |  42 +++++++++++++++
 5 files changed, 213 insertions(+), 19 deletions(-)

diff --git a/.gitignore b/.gitignore
index 34d8190..5391d87 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,6 +1,138 @@
-/list/*.csv
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
 *.log
-*.pyc
-*.pkl
-lib/models/*.pyc
-lib/utils/*.pyc
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+# For a library or package, you might want to ignore these files since the code is
+# intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
\ No newline at end of file
diff --git a/README.md b/README.md
index a3f8335..21653d8 100644
--- a/README.md
+++ b/README.md
@@ -17,5 +17,6 @@ Currently, this codebase supports the following models:
 
 ## Supporting Team
 
-This codebase is actively supported by Facebook AI computer vision: @CHJoanna, @weiyaowang, @hengcv, @deeptigp, @dutran, and community researchers @bjuncek (Quantsight, Oxford VGG).
+This codebase is actively supported by Facebook AI computer vision: @CHJoanna, @weiyaowang, @hengcv, @deeptigp, @dutran, and community researchers @bjuncek (Quansight, Oxford VGG).
+
diff --git a/c2/README.md b/c2/README.md
index 8ef4b6b..57a40bb 100644
--- a/c2/README.md
+++ b/c2/README.md
@@ -15,23 +15,23 @@ We provide our latest video models including R(2+1)D, ir-CSN, ip-CSN (all with 1
 ### R(2+1)D-152
 
-| Input size | Pretrained dataset | Pretrained model | Video@1 Kinetics | Video@5 Kinetics | Finetuned model | GFLOPs | params(M) |
-| ---------- | --------| ---- | ------- | ------- | -------- | ----- | ------ |
-| 32x112x112 | Sports1M | [link](https://www.dropbox.com/s/w5cdqeyqukuaqt7/r2plus1d_152_sports1m_from_scratch_f127111290.pkl?dl=0) | 79.5 | 94.0 | [link](https://www.dropbox.com/s/twvcpe30rxuaf45/r2plus1d_152_ft_kinetics_from_sports1m_f128957437.pkl?dl=0) | 329.1 | 118.0 |
-| 32x112x112 | IG-65M | [link](https://www.dropbox.com/s/oqdg176p7nqc84v/r2plus1d_152_ig65m_from_scratch_f106380637.pkl?dl=0) | 81.6 | 95.3 | [link](https://www.dropbox.com/s/tmxuae8ubo5gipy/r2plus1d_152_ft_kinetics_from_ig65m_f107107466.pkl?dl=0) | 329.1 | 118.0 |
+| Input size | Pretrained dataset | Pretrained model | Video@1 Kinetics | Video@5 Kinetics | Finetuned model | GFLOPs | params(M) |
+| ---------- | ------------------ | ---------------- | ---------------- | ---------------- | --------------- | ------ | --------- |
+| 32x112x112 | Sports1M           | [link](https://www.dropbox.com/s/w5cdqeyqukuaqt7/r2plus1d_152_sports1m_from_scratch_f127111290.pkl?dl=0) | 79.5             | 94.0             | [link](https://www.dropbox.com/s/twvcpe30rxuaf45/r2plus1d_152_ft_kinetics_from_sports1m_f128957437.pkl?dl=0) | 329.1  | 118.0     |
+| 32x112x112 | IG-65M             | [link](https://www.dropbox.com/s/oqdg176p7nqc84v/r2plus1d_152_ig65m_from_scratch_f106380637.pkl?dl=0)    | 81.6             | 95.3             | [link](https://www.dropbox.com/s/tmxuae8ubo5gipy/r2plus1d_152_ft_kinetics_from_ig65m_f107107466.pkl?dl=0)    | 329.1  | 118.0     |
 
 ### ir-CSN-152
 
-| Input size | Pretrained dataset | Pretrained model | Video@1 Kinetics | Video@5 Kinetics | Finetuned model | GFLOPS | params(M) |
-| ---------- | ------| ------ | ------- | ------- | -------- | ----- | ------ |
-| 32x224x224 | Sports1M | [link](https://www.dropbox.com/s/woh99y2hll1mlqv/irCSN_152_Sports1M_from_scratch_f99918785.pkl?dl=0) | 78.2 | 93.0 | [link](https://www.dropbox.com/s/zuoj1aqouh6bo6k/irCSN_152_ft_kinetics_from_Sports1M_f101599884.pkl?dl=0) | 96.7 | 29.6 |
-| 32x224x224 | IG-65M | [link](https://www.dropbox.com/s/r0kppq7ox6c57no/irCSN_152_ig65m_from_scratch_f125286141.pkl?dl=0) | 82.6 | 95.3 | [link](https://www.dropbox.com/s/gmd8r87l3wmkn3h/irCSN_152_ft_kinetics_from_ig65m_f126851907.pkl?dl=0) | 96.7 | 29.6 |
+| Input size | Pretrained dataset | Pretrained model | Video@1 Kinetics | Video@5 Kinetics | Finetuned model | GFLOPs | params(M) |
+| ---------- | ------------------ | ---------------- | ---------------- | ---------------- | --------------- | ------ | --------- |
+| 32x224x224 | Sports1M           | [link](https://www.dropbox.com/s/woh99y2hll1mlqv/irCSN_152_Sports1M_from_scratch_f99918785.pkl?dl=0)  | 78.2             | 93.0             | [link](https://www.dropbox.com/s/zuoj1aqouh6bo6k/irCSN_152_ft_kinetics_from_Sports1M_f101599884.pkl?dl=0) | 96.7   | 29.6      |
+| 32x224x224 | IG-65M             | [link](https://www.dropbox.com/s/r0kppq7ox6c57no/irCSN_152_ig65m_from_scratch_f125286141.pkl?dl=0)    | 82.6             | 95.3             | [link](https://www.dropbox.com/s/gmd8r87l3wmkn3h/irCSN_152_ft_kinetics_from_ig65m_f126851907.pkl?dl=0)    | 96.7   | 29.6      |
 
 ### ip-CSN-152
 
-| Input size | Pretrained dataset | Pretrained model | Video@1 Kinetics | Video@5 Kinetics | Finetuned model | GFLOPS | params(M) |
-| ---------- | ------ | ------ | ------- | ------- | -------- | ----- | ------ |
-| 32x224x224 | Sports1M | [link](https://www.dropbox.com/s/70di7o7qz6gjq6x/ipCSN_152_Sports1M_from_scratch_f111018543.pkl?dl=0) | 78.8 | 93.5 | [link](https://www.dropbox.com/s/ir7cr0hda36knux/ipCSN_152_ft_kinetics_from_Sports1M_f111279053.pkl?dl=0) | 108.8 | 32.8 |
-| 32x224x224 | IG-65M | [link](https://www.dropbox.com/s/1ryvx8k7kzs8od6/ipCSN_152_ig65m_from_scratch_f130601052.pkl?dl=0) | 82.5 | 95.3 | [link](https://www.dropbox.com/s/zpp3p0vn2i7bibl/ipCSN_152_ft_kinetics_from_ig65m_f133090949.pkl?dl=0) | 108.8 | 32.8 |
+| Input size | Pretrained dataset | Pretrained model | Video@1 Kinetics | Video@5 Kinetics | Finetuned model | GFLOPs | params(M) |
+| ---------- | ------------------ | ---------------- | ---------------- | ---------------- | --------------- | ------ | --------- |
+| 32x224x224 | Sports1M           | [link](https://www.dropbox.com/s/70di7o7qz6gjq6x/ipCSN_152_Sports1M_from_scratch_f111018543.pkl?dl=0) | 78.8             | 93.5             | [link](https://www.dropbox.com/s/ir7cr0hda36knux/ipCSN_152_ft_kinetics_from_Sports1M_f111279053.pkl?dl=0) | 108.8  | 32.8      |
+| 32x224x224 | IG-65M             | [link](https://www.dropbox.com/s/1ryvx8k7kzs8od6/ipCSN_152_ig65m_from_scratch_f130601052.pkl?dl=0)    | 82.5             | 95.3             | [link](https://www.dropbox.com/s/zpp3p0vn2i7bibl/ipCSN_152_ft_kinetics_from_ig65m_f133090949.pkl?dl=0)    | 108.8  | 32.8      |
 
 ## References
 
@@ -44,4 +44,4 @@ We provide our latest video models including R(2+1)D, ir-CSN, ip-CSN (all with 1
 VMZ is Apache 2.0 licensed, as found in the LICENSE file.
 
 ## Supporting Team
-This codebase is actively supported by some members of CV team (Facebook AI): @CHJoanna, @weiyaowang, @bjuncek, @hengcv, @deeptigp, and @dutran.
+This codebase is actively supported by some members of the CV team (Facebook AI): @CHJoanna, @weiyaowang, @hengcv, @deeptigp, and @dutran.
diff --git a/pt/INSTALL.md b/pt/INSTALL.md
index e69de29..340f437 100644
--- a/pt/INSTALL.md
+++ b/pt/INSTALL.md
@@ -0,0 +1,19 @@
+## Installation instructions (conda)
+
+```
+# install PyTorch
+conda install pytorch torchvision cudatoolkit=10.1 -c pytorch-nightly
+
+# annoyingly, an older torchvision then has to be reinstalled on top of PyAV
+pip uninstall torchvision
+conda install av -c conda-forge
+pip install torchvision==0.5
+
+pip install submitit
+
+# tensorflow is, annoyingly, also required
+pip install tensorflow
+
+pip install -e .
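+
+# Sanity check (a sketch, assuming the installs above succeeded): the pinned
+# torch / torchvision / av stack should import cleanly and report its versions.
+python -c "import torch, torchvision, av; print(torch.__version__, torchvision.__version__, av.__version__)"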
+
+```
\ No newline at end of file
diff --git a/pt/README.md b/pt/README.md
index e69de29..bd3974d 100644
--- a/pt/README.md
+++ b/pt/README.md
@@ -0,0 +1,42 @@
+# VMZ: Model Zoo for Video Modeling
+
+VMZ is a PyTorch codebase for video modeling developed by the Computer Vision team at Facebook AI. The aim of this codebase is to help other researchers and industry practitioners:
++ reproduce some of our research results and
++ leverage our very strong pre-trained models.
+
+Currently, this codebase supports the following models:
++ R(2+1)D and MCx models [[1]](https://research.fb.com/wp-content/uploads/2018/04/a-closer-look-at-spatiotemporal-convolutions-for-action-recognition.pdf).
++ CSN models [[2]](https://arxiv.org/pdf/1904.02811.pdf).
++ R(2+1)D and CSN models pre-trained on large-scale (65 million!) weakly-supervised public Instagram videos (**IG-65M**) [[3]](https://research.fb.com/wp-content/uploads/2019/05/Large-scale-weakly-supervised-pre-training-for-video-action-recognition.pdf).
+
+## Main Models
+
+We provide our latest video models, including R(2+1)D, ir-CSN, and ip-CSN (all with 152 layers), pre-trained on Sports-1M or **IG-65M** and then fine-tuned on Kinetics-400. Both pre-trained and fine-tuned weights are available (see the list of pre-trainings below). We hope these models will serve as valuable baselines and feature extractors for related video modeling tasks such as action detection, video captioning, and video Q&A.
+
+For convenience, all models are provided through torch hub, with the available pre-trainings registered alongside each model definition. Most models accept the following pre-trainings, which correspond to their Caffe2 equivalents:
+
+```
+avail_pretrainings = [
+    "ig65m_32frms",
+    "ig_ft_kinetics_32frms",
+    "sports1m_32frms",
+    "sports1m_ft_kinetics_32frms",
+]
+```
+
+This allows each model to be loaded with its respective pre-training via torch hub. If you want to use a model directly, you can simply import it from the `vmz` package:
+
+```
+from vmz.models import ir_csn_152
+model = ir_csn_152(pretraining="ig65m_32frms")
+```
+
+
+## References
+1. D. Tran, H. Wang, L. Torresani, J. Ray, Y. LeCun and M. Paluri. **A Closer Look at Spatiotemporal Convolutions for Action Recognition.** CVPR 2018.
+2. D. Tran, H. Wang, L. Torresani and M. Feiszli. **Video Classification with Channel-Separated Convolutional Networks.** ICCV 2019.
+3. D. Ghadiyaram, M. Feiszli, D. Tran, X. Yan, H. Wang and D. Mahajan. **Large-scale weakly-supervised pre-training for video action recognition.** CVPR 2019.
+
+
+## License
+VMZ is Apache 2.0 licensed, as found in the LICENSE file.
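+
+## Example: running a fine-tuned model on a dummy clip
+
+As a quick end-to-end sanity check, here is a minimal sketch that loads ir-CSN-152 (IG-65M pre-training, fine-tuned on Kinetics-400, as listed in `avail_pretrainings` above) and runs it on a random clip. The `(batch, channels, frames, height, width)` layout and the 32x224x224 clip size are assumptions taken from the ir-CSN input size documented in `c2/README.md`; the 400-way output assumes the Kinetics-400 head.
+
+```
+import torch
+from vmz.models import ir_csn_152
+
+# Load ir-CSN-152 with IG-65M pre-training fine-tuned on Kinetics-400 (32-frame clips).
+model = ir_csn_152(pretraining="ig_ft_kinetics_32frms")
+model.eval()
+
+# Random stand-in clip; layout assumed to be (batch, channels, frames, height, width),
+# with 32 frames at 224x224 matching the ir-CSN input size.
+clip = torch.randn(1, 3, 32, 224, 224)
+
+with torch.no_grad():
+    logits = model(clip)
+
+print(logits.shape)  # expected: torch.Size([1, 400]) for the Kinetics-400 classes
+```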