Commit

update setup and docs in v0.2.7
Lupin1998 committed Dec 25, 2022
1 parent 1f47b5c commit baece8e
Showing 52 changed files with 508 additions and 397 deletions.
2 changes: 1 addition & 1 deletion .github/CONTRIBUTING.md
@@ -1 +1 @@
We appreciate all contributions to improve OpenMixup. We follow the development standard of MMLab. Please refer to [CONTRIBUTING.md](https://github.com/open-mmlab/mmcv/blob/master/CONTRIBUTING.md) in MMCV for more details about the contribution guidelines.
We appreciate all contributions to improve OpenMixup. Currently, we follow the development standard of MMLab. Please refer to [CONTRIBUTING.md](https://github.com/open-mmlab/mmcv/blob/master/CONTRIBUTING.md) in MMCV for more details about the contribution guidelines.
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/----.md
@@ -19,7 +19,7 @@ assignees: ''

### Related information

1. The output of the command `pip list | grep "mmcv\|mmcls\|^torch"`
1. The output of the command `pip list | grep "openmixup\|^torch"`
\[Fill in here\]
2. If you modified the config file or used a new one, please paste it here.

@@ -29,5 +29,5 @@ assignees: ''

3. If you met the problem during training, please provide the complete training log and error messages.
\[Fill in here\]
4. If you made other related modifications to the code in the `mmcls` folder, please describe them here.
4. If you made other related modifications to the code in the `openmixup` folder, please describe them here.
\[Fill in here\]
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/---bug.md
@@ -24,7 +24,7 @@ assignees: ''

### Related information

1. The output of the command `pip list | grep "mmcv\|mmcls\|^torch"`
1. The output of the command `pip list | grep "openmixup\|^torch"`
\[Fill in here\]
2. If you modified the config file or used a new one, please paste it here.

@@ -34,7 +34,7 @@ assignees: ''

3. If you met the problem during training, please provide the complete training log and error messages.
\[Fill in here\]
4. If you made other related modifications to the code in the `mmcls` folder, please describe them here.
4. If you made other related modifications to the code in the `openmixup` folder, please describe them here.
\[Fill in here\]

### Additional context
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -22,7 +22,7 @@ The command you executed.

### Post related information

1. The output of `pip list | grep "mmcv\|mmcls\|^torch"`
1. The output of `pip list | grep "openmixup\|^torch"`
\[here\]
2. Your config file if you modified it or created a new one.

@@ -32,7 +32,7 @@ The command you executed.

3. Your train log file if you meet the problem during training.
\[here\]
4. Other code you modified in the `mmcls` folder.
4. Other code you modified in the `openmixup` folder.
\[here\]

### Additional context
4 changes: 2 additions & 2 deletions .github/ISSUE_TEMPLATE/general-questions.md
@@ -17,7 +17,7 @@ assignees: ''

### Post related information

1. The output of `pip list | grep "mmcv\|mmcls\|^torch"`
1. The output of `pip list | grep "openmixup\|^torch"`
\[here\]
2. Your config file if you modified it or created a new one.

@@ -27,5 +27,5 @@ assignees: ''

3. Your train log file if you meet the problem during training.
\[here\]
4. Other code you modified in the `mmcls` folder.
4. Other code you modified in the `openmixup` folder.
\[here\]
2 changes: 1 addition & 1 deletion .github/pull_request_template.md
@@ -1,4 +1,4 @@
Thanks for your contribution to OpenMixup; we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. We follow the standard in MMLab, e.g., [MMClassification](https://github.com/open-mmlab/mmclassification). If you do not understand some items, don't worry: just make the pull request and seek help from the maintainers.
Thanks for your contribution to [OpenMixup](https://github.com/Westlake-AI/openmixup); we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. Currently, we follow the standard in MMLab, e.g., [MMClassification](https://github.com/open-mmlab/mmclassification). If you do not understand some items, don't worry: just make the pull request and seek help from the maintainers.

## Motivation

7 changes: 3 additions & 4 deletions .gitignore
@@ -104,17 +104,15 @@ venv.bak/
# mypy
.mypy_cache/

openmixup/version.py
version.py
# custom
data
.vscode
.idea

# custom
*.pkl
*.pkl.json
*.log.json
work_dirs/
/openmixup/.mim
tools/exp_bash/
pretrains

@@ -137,4 +135,5 @@ configs/selfsup_IP89
# configs/semisup
# configs/selfsup
*.json
*.toml
openmixup/models/classifiers/backup
2 changes: 1 addition & 1 deletion LICENSE
@@ -186,7 +186,7 @@
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright 2020-2021 Open-MMLab.
Copyright 2021-2022 CAIRI AI Lab.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
3 changes: 3 additions & 0 deletions MANIFEST.in
@@ -0,0 +1,3 @@
include requirements/*.txt
recursive-include openmixup/.mim/configs *.py *.yml
recursive-include openmixup/.mim/tools *.sh *.py
83 changes: 23 additions & 60 deletions README.md
@@ -1,6 +1,7 @@
# OpenMixup
[![Release](https://img.shields.io/badge/release-V0.2.7-%09%2360004F)](https://github.com/Westlake-AI/openmixup/releases)
[![Docs](https://img.shields.io/badge/docs-latest-%23002FA7)](https://openmixup.readthedocs.io/en/latest/)
[![release](https://img.shields.io/badge/release-V0.2.7-%09%2360004F)](https://github.com/Westlake-AI/openmixup/releases)
[![PyPI](https://img.shields.io/pypi/v/openmixup)](https://pypi.org/project/openmixup)
[![docs](https://img.shields.io/badge/docs-latest-%23002FA7)](https://openmixup.readthedocs.io/en/latest/)
[![license](https://img.shields.io/badge/license-Apache--2.0-%23B7A800)](https://github.com/Westlake-AI/openmixup/blob/main/LICENSE)
![open issues](https://img.shields.io/github/issues-raw/Westlake-AI/openmixup?color=%23FF9600)
[![issue resolution](https://img.shields.io/badge/issue%20resolution-1%20d-%23009763)](https://github.com/Westlake-AI/openmixup/issues)
@@ -12,14 +13,6 @@
[🔍Awesome MIM](https://openmixup.readthedocs.io/en/latest/awesome_selfsup/MIM.html) |
[🆕News](https://openmixup.readthedocs.io/en/latest/changelog.html)

## News and Updates

[2022-12-16] `OpenMixup` v0.2.7 is released (issue [#35](https://github.com/Westlake-AI/openmixup/issues/35)).

[2022-12-02] Update new features and documents of `OpenMixup` v0.2.6 (issue [#24](https://github.com/Westlake-AI/openmixup/issues/24), issue [#25](https://github.com/Westlake-AI/openmixup/issues/25), issue [#31](https://github.com/Westlake-AI/openmixup/issues/31), and issue [#33](https://github.com/Westlake-AI/openmixup/issues/33)). Update the official implementation of [MogaNet](https://arxiv.org/abs/2211.03295).

[2022-09-14] `OpenMixup` v0.2.6 is released (issue [#20](https://github.com/Westlake-AI/openmixup/issues/20)).

## Introduction

The main branch works with **PyTorch 1.8** (required by some self-supervised methods) or higher (we recommend **PyTorch 1.12**). You can still use **PyTorch 1.6** for supervised classification methods.
@@ -63,13 +56,18 @@ The main branch works with **PyTorch 1.8** (required by some self-supervised met
</ol>
</details>

<p align="right">(<a href="#top">back to top</a>)</p>
## News and Updates

[2022-12-16] `OpenMixup` v0.2.7 is released (issue [#35](https://github.com/Westlake-AI/openmixup/issues/35)).

[2022-12-02] Update new features and documents of `OpenMixup` v0.2.6 (issue [#24](https://github.com/Westlake-AI/openmixup/issues/24), issue [#25](https://github.com/Westlake-AI/openmixup/issues/25), issue [#31](https://github.com/Westlake-AI/openmixup/issues/31), and issue [#33](https://github.com/Westlake-AI/openmixup/issues/33)). Update the official implementation of [MogaNet](https://arxiv.org/abs/2211.03295).

[2022-09-14] `OpenMixup` v0.2.6 is released (issue [#20](https://github.com/Westlake-AI/openmixup/issues/20)).

## Installation

OpenMixup is compatible with **Python 3.7/3.8/3.9** and **PyTorch >= 1.8**. Here are installation steps for development:
OpenMixup is compatible with **Python 3.6/3.7/3.8/3.9** and **PyTorch >= 1.6**. Here are quick installation steps for development:

### From Source
```shell
conda create -n openmixup python=3.8 pytorch=1.12 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate openmixup
@@ -79,63 +77,30 @@ git clone https://github.com/Westlake-AI/openmixup.git
cd openmixup
python setup.py develop
```
### From PyPI
```shell
conda create -n openmixup python=3.8 pytorch=1.12 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate openmixup
pip install openmim
mim install mmcv-full
pip install openmixup
cd openmixup
python setup.py develop
```

Please refer to [install.md](docs/en/install.md) for more detailed installation and dataset preparation instructions.
Please refer to [install.md](docs/en/install.md) for more detailed installation and dataset preparation.

## Getting Started

OpenMixup supports Linux, macOS and Windows. It enables easy implementation and extensions of mixup data augmentation methods in existing supervised, self-, and semi-supervised visual recognition models. Please see [get_started.md](docs/en/get_started.md) for the basic usage of OpenMixup.
OpenMixup supports Linux and macOS. It enables easy implementation and extensions of mixup data augmentation methods in existing supervised, self-, and semi-supervised visual recognition models. Please see [get_started.md](docs/en/get_started.md) for the basic usage of OpenMixup.
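
The "mixup data augmentation" mentioned above refers to blending pairs of training images and their labels. As a rough illustration only (a generic sketch, not code from this repository; OpenMixup implements many variants through its own APIs), vanilla mixup can be written as:

```python
import torch
import torch.nn.functional as F

def mixup_batch(imgs, labels, alpha=0.2, num_classes=1000):
    """Blend each image/label with a randomly paired partner from the same batch.

    imgs: float tensor of shape (N, C, H, W); labels: long tensor of class indices.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()  # mixing ratio in (0, 1)
    index = torch.randperm(imgs.size(0))                          # random pairing within the batch
    mixed_imgs = lam * imgs + (1.0 - lam) * imgs[index]
    one_hot = F.one_hot(labels, num_classes).float()
    mixed_labels = lam * one_hot + (1.0 - lam) * one_hot[index]
    return mixed_imgs, mixed_labels
```

Here `alpha` controls the Beta distribution that samples the mixing ratio `lam`; values in the 0.1 to 1.0 range are common defaults.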

### Quick Start
This is an example of how to quickly set up OpenMixup on your device. You can get a local copy running by following the example steps below.
#### Step0: Create your environment
```shell
conda create -n openmixup python=3.8 pytorch=1.12 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate openmixup
```
#### Step1: Install the required packages
```shell
pip install -U openmim
mim install mmcv-full
```
### Training and Evaluation Scripts

#### Step2: Clone and develop the project
```shell
git clone https://github.com/Westlake-AI/openmixup.git
cd openmixup
python setup.py develop
```
Now you can use the copy you just set up for your own project.

### Training Script

Here, we provide example scripts so that you can quickly start accelerated end-to-end training on multiple GPUs with a specified `CONFIG_FILE`.
Here, we provide scripts for quickly starting end-to-end training with multiple GPUs and a specified `CONFIG_FILE`.
```shell
bash tools/dist_train.sh ${CONFIG_FILE} ${GPUS} [optional arguments]
```
To be more specific, you can run the script below to train a designated mixup algorithm for CIFAR-100 classification with 4 GPUs:
For example, you can run the script below to train a ResNet-50 classifier on ImageNet with 4 GPUs:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 bash tools/dist_train.sh openmixup\configs\classification\cifar100\mixups\basic\r18_mixups_CE_none.py 4
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 bash tools/dist_train.sh configs/classification/imagenet/resnet/resnet50_4xb64_cos_ep100.py 4
```

### Evaluation Script
After training, you can test the trained models with the corresponding evaluation script. An example of evaluation with 4 GPUs is as follows:

After training, you can test the trained models with the corresponding evaluation script:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 bash tools/dist_test.sh ${CONFIG_FILE} ${GPUS} ${PATH_TO_MODEL} [optional arguments]
bash tools/dist_test.sh ${CONFIG_FILE} ${GPUS} ${PATH_TO_MODEL} [optional arguments]
```

### Develop
### Development

Please see [Tutorials](docs/en/tutorials) for more developing examples and tech details:

- [config files](docs/en/tutorials/0_config.md)
@@ -276,7 +241,7 @@ This project is released under the [Apache 2.0 license](LICENSE). See `LICENSE`

## Citation

If you find this project useful in your research, please consider citing our GitHub repo or [tech report](https://arxiv.org/abs/2209.04851):
If you find this project useful in your research, please consider starring our GitHub repo and citing our [tech report](https://arxiv.org/abs/2209.04851):

```BibTeX
@misc{2022openmixup,
@@ -301,14 +266,12 @@ If you find this project useful in your research, please consider cite our GitHu

## Contributors and Contact

For now, the direct contributors include: Siyuan Li ([@Lupin1998](https://github.com/Lupin1998)), Zedong Wang ([@Jacky1128](https://github.com/Jacky1128)), Zicheng Liu ([@pone7](https://github.com/pone7)), Di Wu ([@wudi-bu](https://github.com/wudi-bu)), Tengfei Wang ([@wang-tf](https://github.com/wang-tf)), and Minglong Liu ([@minhlong94](https://github.com/minhlong94)). We thank contributors from MMSelfSup and MMClassification and all public contributors!
For help, requesting new features, or reporting bugs associated with OpenMixup, please open a [GitHub issue](https://github.com/Westlake-AI/openmixup/issues) or [pull request](https://github.com/Westlake-AI/openmixup/pulls) with the tag "help wanted" or "enhancement". For now, the direct contributors include: Siyuan Li ([@Lupin1998](https://github.com/Lupin1998)), Zedong Wang ([@Jacky1128](https://github.com/Jacky1128)), and Zicheng Liu ([@pone7](https://github.com/pone7)). We thank all public contributors and contributors from MMSelfSup and MMClassification!

This repo is currently maintained by:

- Siyuan Li ([email protected]), Westlake University
- Zedong Wang ([email protected]), Westlake University
- Zicheng Liu ([email protected]), Westlake University

If you have suggestions that would make OpenMixup better, please fork the repo and create a pull request. You are also encouraged to open an issue with the tag "help wanted" or "enhancement". Don't forget to give OpenMixup a star! Thanks again!

<p align="right">(<a href="#top">back to top</a>)</p>
@@ -6,6 +6,7 @@

# data
data = dict(imgs_per_gpu=128, workers_per_gpu=10)
sampler = "RepeatAugSampler" # the official repo uses `repeated_aug` for more stable training

# additional hooks
update_interval = 1 # 128 x 8gpus x 1 accumulates = bs1024
@@ -0,0 +1,43 @@
_base_ = [
'../../_base_/models/deit/deit_base_p16_sz224.py',
'../../_base_/datasets/imagenet/swin_sz224_8xbs128.py',
'../../_base_/default_runtime.py',
]

# data
data = dict(imgs_per_gpu=256, workers_per_gpu=12)
sampler = "RepeatAugSampler" # the official repo uses `repeated_aug` for more stable training

# additional hooks
update_interval = 1 # 256 x 8gpus x 1 accumulates = bs2048

# optimizer
optimizer = dict(
type='Adan',
lr=1.5e-2, # lr = 1.5e-2 / bs2048
weight_decay=0.02, eps=1e-8, betas=(0.98, 0.92, 0.99),
max_grad_norm=5.0,
paramwise_options={
'(bn|ln|gn)(\d+)?.(weight|bias)': dict(weight_decay=0.),
'norm': dict(weight_decay=0.),
'bias': dict(weight_decay=0.),
'cls_token': dict(weight_decay=0.),
'pos_embed': dict(weight_decay=0.),
})

# fp16
use_fp16 = True
fp16 = dict(type='mmcv', loss_scale='dynamic')
optimizer_config = dict(update_interval=update_interval)

# lr scheduler
lr_config = dict(
policy='CosineAnnealing',
by_epoch=False, min_lr=1e-8,
warmup='linear',
warmup_iters=60, warmup_by_epoch=True, # warmup 60 epochs.
warmup_ratio=1e-8,
)

# runtime settings
runner = dict(type='EpochBasedRunner', max_epochs=150)
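
The comments in the config above ("lr = 1.5e-2 / bs2048" and "256 x 8gpus x 1 accumulates = bs2048") tie the base learning rate to the effective batch size. A minimal sketch of that arithmetic, assuming the common linear learning-rate scaling convention (the helpers below are illustrative and not an OpenMixup API):

```python
def effective_batch_size(imgs_per_gpu: int, num_gpus: int, update_interval: int) -> int:
    """Effective batch size = per-GPU batch x number of GPUs x gradient-accumulation steps."""
    return imgs_per_gpu * num_gpus * update_interval

def linearly_scaled_lr(base_lr: float, base_bs: int, new_bs: int) -> float:
    """Linear scaling rule (an assumption here): keep lr / batch_size constant."""
    return base_lr * new_bs / base_bs

bs = effective_batch_size(imgs_per_gpu=256, num_gpus=8, update_interval=1)  # 2048
lr = linearly_scaled_lr(base_lr=1.5e-2, base_bs=2048, new_bs=bs)            # 0.015
print(bs, lr)
```

Under these assumptions, halving the per-GPU batch size to 128 (effective batch size 1024) would suggest a base learning rate of about 7.5e-3.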
@@ -0,0 +1,4 @@
_base_ = './deit_base_adan_8xb256_fp16_ep150.py'

# runtime settings
runner = dict(type='EpochBasedRunner', max_epochs=300)
@@ -6,6 +6,7 @@

# data
data = dict(imgs_per_gpu=128, workers_per_gpu=10)
# sampler = "RepeatAugSampler" # this repo reproduce the performance without `repeated_aug`

# additional hooks
update_interval = 1 # 128 x 8gpus x 1 accumulates = bs1024
@@ -1,18 +1,9 @@
_base_ = [
'../../_base_/models/deit/deit_small_p16_sz224.py',
'../../_base_/datasets/imagenet/deit_adan_sz224_8xbs256.py',
'../../_base_/datasets/imagenet/swin_sz224_8xbs128.py',
'../../_base_/default_runtime.py',
]

# model settings
model = dict(
head=dict(
type='VisionTransformerClsHead', # mixup BCE + label smooth
loss=dict(type='LabelSmoothLoss',
label_smooth_val=0.1, num_classes=1000, mode='multi_label', loss_weight=1.0),
in_channels=384, num_classes=1000)
)

# data
data = dict(imgs_per_gpu=256, workers_per_gpu=12)

@@ -22,17 +13,17 @@
# optimizer
optimizer = dict(
type='Adan',
lr=1.5e-3, # lr = 1.5e-3 / bs2048
lr=1.5e-2, # lr = 1.5e-2 / bs2048
weight_decay=0.02, eps=1e-8, betas=(0.98, 0.92, 0.99),
max_grad_norm=0.0,
max_grad_norm=5.0,
paramwise_options={
'(bn|ln|gn)(\d+)?.(weight|bias)': dict(weight_decay=0.),
'norm': dict(weight_decay=0.),
'bias': dict(weight_decay=0.),
'cls_token': dict(weight_decay=0.),
'pos_embed': dict(weight_decay=0.),
})

# fp16
use_fp16 = True
fp16 = dict(type='mmcv', loss_scale='dynamic')