diff --git a/.github/CODE_OF_CONDUCT.md b/.github/CODE_OF_CONDUCT.md
deleted file mode 100644
index 92afad1c5a..0000000000
--- a/.github/CODE_OF_CONDUCT.md
+++ /dev/null
@@ -1,76 +0,0 @@
-# Contributor Covenant Code of Conduct
-
-## Our Pledge
-
-In the interest of fostering an open and welcoming environment, we as
-contributors and maintainers pledge to making participation in our project and
-our community a harassment-free experience for everyone, regardless of age, body
-size, disability, ethnicity, sex characteristics, gender identity and expression,
-level of experience, education, socio-economic status, nationality, personal
-appearance, race, religion, or sexual identity and orientation.
-
-## Our Standards
-
-Examples of behavior that contributes to creating a positive environment
-include:
-
-- Using welcoming and inclusive language
-- Being respectful of differing viewpoints and experiences
-- Gracefully accepting constructive criticism
-- Focusing on what is best for the community
-- Showing empathy towards other community members
-
-Examples of unacceptable behavior by participants include:
-
-- The use of sexualized language or imagery and unwelcome sexual attention or
-  advances
-- Trolling, insulting/derogatory comments, and personal or political attacks
-- Public or private harassment
-- Publishing others' private information, such as a physical or electronic
-  address, without explicit permission
-- Other conduct which could reasonably be considered inappropriate in a
-  professional setting
-
-## Our Responsibilities
-
-Project maintainers are responsible for clarifying the standards of acceptable
-behavior and are expected to take appropriate and fair corrective action in
-response to any instances of unacceptable behavior.
-
-Project maintainers have the right and responsibility to remove, edit, or
-reject comments, commits, code, wiki edits, issues, and other contributions
-that are not aligned to this Code of Conduct, or to ban temporarily or
-permanently any contributor for other behaviors that they deem inappropriate,
-threatening, offensive, or harmful.
-
-## Scope
-
-This Code of Conduct applies both within project spaces and in public spaces
-when an individual is representing the project or its community. Examples of
-representing a project or community include using an official project e-mail
-address, posting via an official social media account, or acting as an appointed
-representative at an online or offline event. Representation of a project may be
-further defined and clarified by project maintainers.
-
-## Enforcement
-
-Instances of abusive, harassing, or otherwise unacceptable behavior may be
-reported by contacting the project team at chenkaidev@gmail.com. All
-complaints will be reviewed and investigated and will result in a response that
-is deemed necessary and appropriate to the circumstances. The project team is
-obligated to maintain confidentiality with regard to the reporter of an incident.
-Further details of specific enforcement policies may be posted separately.
-
-Project maintainers who do not follow or enforce the Code of Conduct in good
-faith may face temporary or permanent repercussions as determined by other
-members of the project's leadership.
-
-## Attribution
-
-This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
-available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
-
-For answers to common questions about this code of conduct, see
-https://www.contributor-covenant.org/faq
-
-[homepage]: https://www.contributor-covenant.org
diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
deleted file mode 100644
index 0814085f50..0000000000
--- a/.github/CONTRIBUTING.md
+++ /dev/null
@@ -1,61 +0,0 @@
-# Contributing to MMagic
-
-All kinds of contributions are welcome, including but not limited to the following.
-
-- Fix typos or bugs
-- Add documentation or translate the documentation into other languages
-- Add new features and components
-
-## Workflow
-
-1. Fork and pull the latest MMagic repository
-2. Check out a new branch (do not use the master branch for PRs)
-3. Commit your changes
-4. Create a PR
-
-```{note}
-If you plan to add some new features that involve large changes, it is encouraged to open an issue for discussion first.
-```
-
-## Code style
-
-### Python
-
-We adopt [PEP8](https://www.python.org/dev/peps/pep-0008/) as the preferred code style.
-
-We use the following tools for linting and formatting:
-
-- [flake8](https://github.com/PyCQA/flake8): A wrapper around some linter tools.
-- [isort](https://github.com/timothycrosley/isort): A Python utility to sort imports.
-- [yapf](https://github.com/google/yapf): A formatter for Python files.
-- [codespell](https://github.com/codespell-project/codespell): A Python utility to fix common misspellings in text files.
-- [mdformat](https://github.com/executablebooks/mdformat): Mdformat is an opinionated Markdown formatter that can be used to enforce a consistent style in Markdown files.
-- [docformatter](https://github.com/myint/docformatter): A formatter to format docstring.
-
-Style configurations can be found in [setup.cfg](/setup.cfg).
-
-We use a [pre-commit hook](https://pre-commit.com/) that checks and formats `flake8`, `yapf`, `isort`, `trailing whitespaces` and `markdown files`,
-fixes `end-of-files`, `double-quoted-strings`, `python-encoding-pragma` and `mixed-line-ending`, and sorts `requirements.txt` automatically on every commit.
-The config for a pre-commit hook is stored in [.pre-commit-config](/.pre-commit-config.yaml).
-
-After you clone the repository, you will need to install and initialize the pre-commit hook.
-
-```shell
-pip install -U pre-commit
-```
-
-From the repository folder, run
-
-```shell
-pre-commit install
-```
-
-After this, the code linters and formatters will be enforced on every commit.
-
-```{important}
-Before you create a PR, make sure that your code passes the lint checks and is formatted by yapf.
-```
-
-### C++ and CUDA
-
-We follow the [Google C++ Style Guide](https://google.github.io/styleguide/cppguide.html).
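
The removed guide points to [.pre-commit-config](/.pre-commit-config.yaml) for the hook setup but does not show its shape. Below is a minimal sketch of such a config covering a few of the hooks named above; the hook revisions are illustrative assumptions, not the repository's actual pins:

```yaml
# Minimal .pre-commit-config.yaml sketch; `rev` pins are assumptions for illustration.
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 5.0.4                  # assumed pin
    hooks:
      - id: flake8
  - repo: https://github.com/timothycrosley/isort
    rev: 5.11.5                 # assumed pin
    hooks:
      - id: isort
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0                 # assumed pin
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: mixed-line-ending
        args: ["--fix=lf"]
```

With a config like this in place, `pre-commit install` registers the hooks and `pre-commit run --all-files` applies them to the whole tree, which is how the `lint` workflow below invokes them.
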
diff --git a/.github/ISSUE_TEMPLATE/1-bug-report.yml b/.github/ISSUE_TEMPLATE/1-bug-report.yml
deleted file mode 100644
index fe523ad454..0000000000
--- a/.github/ISSUE_TEMPLATE/1-bug-report.yml
+++ /dev/null
@@ -1,105 +0,0 @@
-name: "🐞 Bug report"
-description: "Create a report to help us reproduce and fix the bug"
-labels: "kind/bug,status/unconfirmed"
-title: "[Bug] "
-
-body:
-  - type: markdown
-    attributes:
-      value: |
-        If you have already identified the reason, we strongly appreciate you creating a new PR to fix it [here](https://github.com/open-mmlab/mmagic/pulls)!
-        If this issue is about installing MMCV, please file an issue at [MMCV](https://github.com/open-mmlab/mmcv/issues/new/choose).
-        If you need our help, please fill in as much of the following form as you're able to.
-
-        **The less clear the description, the longer it will take to solve it.**
-
-  - type: checkboxes
-    attributes:
-      label: Prerequisite
-      description: Please check the following items before creating a new issue.
-      options:
-      - label: I have searched [Issues](https://github.com/open-mmlab/mmagic/issues) and [Discussions](https://github.com/open-mmlab/mmagic/discussions) but cannot get the expected help.
-        required: true
-      - label: I have read the [FAQ documentation](https://mmagic.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
-        required: true
-      - label: The bug has not been fixed in the [latest version (main)](https://github.com/open-mmlab/mmagic) or [latest version (0.x)](https://github.com/open-mmlab/mmagic/tree/0.x).
-        required: true
-
-  - type: dropdown
-    id: task
-    attributes:
-      label: Task
-      description: The problem arises when
-      options:
-        - I'm using the official example scripts/configs for the officially supported tasks/models/datasets.
-        - I have modified the scripts/configs, or I'm working on my own tasks/models/datasets.
-    validations:
-      required: true
-
-  - type: dropdown
-    id: branch
-    attributes:
-      label: Branch
-      description: The problem arises when I'm working on
-      options:
-        - main branch https://github.com/open-mmlab/mmagic
-        - 0.x branch https://github.com/open-mmlab/mmagic/tree/0.x
-    validations:
-      required: true
-
-
-  - type: textarea
-    attributes:
-      label: Environment
-      description: |
-        Please run `python mmagic/utils/collect_env.py` to collect necessary environment information and copy-paste it here.
-        You may add additional information that may be helpful for locating the problem, such as
-          - How you installed PyTorch \[e.g., pip, conda, source\]
-          - Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
-    validations:
-      required: true
-
-  - type: textarea
-    attributes:
-      label: Reproduces the problem - code sample
-      description: |
-        Please provide a code sample that reproduces the problem you ran into. It can be a Colab link or just a code snippet.
-      placeholder: |
-        ```python
-        # Sample code to reproduce the problem
-        ```
-    validations:
-      required: true
-
-  - type: textarea
-    attributes:
-      label: Reproduces the problem - command or script
-      description: |
-        What command or script did you run?
-      placeholder: |
-        ```shell
-        The command or script you run.
-        ```
-    validations:
-      required: true
-
-  - type: textarea
-    attributes:
-      label: Reproduces the problem - error message
-      description: |
-        Please provide the error message or logs you got, with the full traceback.
-      placeholder: |
-        ```
-        The error message or logs you got, with the full traceback.
-        ```
-    validations:
-      required: true
-
-  - type: textarea
-    attributes:
-      label: Additional information
-      description: Tell us anything else you think we should know.
-      placeholder: |
-        1. What's your expected result?
-        2. What dataset did you use?
-        3. What do you think might be the reason?
diff --git a/.github/ISSUE_TEMPLATE/2-feature-request.yml b/.github/ISSUE_TEMPLATE/2-feature-request.yml
deleted file mode 100644
index ff273f40b7..0000000000
--- a/.github/ISSUE_TEMPLATE/2-feature-request.yml
+++ /dev/null
@@ -1,31 +0,0 @@
-name: 🚀 Feature request
-description: Suggest an idea for this project
-labels: "kind/enhancement,status/unconfirmed"
-title: "[Feature] "
-
-body:
-  - type: markdown
-    attributes:
-      value: |
-        We strongly appreciate you creating a PR to implement this feature [here](https://github.com/open-mmlab/mmagic/pulls)!
-        If you need our help, please fill in as much of the following form as you're able to.
-
-        **The less clear the description, the longer it will take to solve it.**
-
-  - type: textarea
-    attributes:
-      label: What's the feature?
-      description: |
-        Tell us more about the feature and how this feature can help.
-      placeholder: |
-        E.g., It is inconvenient when \[....\].
-        This feature can \[....\].
-    validations:
-      required: true
-
-  - type: textarea
-    attributes:
-      label: Any other context?
-      description: |
-        Have you considered any alternative solutions or features? If so, what are they?
-        Also, feel free to add any other context or screenshots about the feature request here.
diff --git a/.github/ISSUE_TEMPLATE/3-new-model.yml b/.github/ISSUE_TEMPLATE/3-new-model.yml
deleted file mode 100644
index 2346685ea0..0000000000
--- a/.github/ISSUE_TEMPLATE/3-new-model.yml
+++ /dev/null
@@ -1,32 +0,0 @@
-name: "\U0001F31F New model/dataset/scheduler addition"
-description: Submit a proposal/request to implement a new model / dataset / scheduler
-labels: "kind/feature,status/unconfirmed"
-title: "[New Models] "
-
-
-body:
-  - type: textarea
-    id: description-request
-    validations:
-      required: true
-    attributes:
-      label: Model/Dataset/Scheduler description
-      description: |
-        Put any and all important information relevant to the model/dataset/scheduler.
-
-  - type: checkboxes
-    attributes:
-      label: Open source status
-      description: |
-          Please provide the open-source status, which would be very helpful.
-      options:
-        - label: "The model implementation is available."
-        - label: "The model weights are available."
-
-  - type: textarea
-    id: additional-info
-    attributes:
-      label: Provide useful links for the implementation
-      description: |
-        Please provide information regarding the implementation, the weights, and the authors.
-        Please mention the authors by @gh-username if you're aware of their usernames.
diff --git a/.github/ISSUE_TEMPLATE/4-documentation.yml b/.github/ISSUE_TEMPLATE/4-documentation.yml
deleted file mode 100644
index a558f60f36..0000000000
--- a/.github/ISSUE_TEMPLATE/4-documentation.yml
+++ /dev/null
@@ -1,34 +0,0 @@
-name: 📚 Documentation
-description: Report an issue related to the documentation.
-labels: "kind/doc,status/unconfirmed"
-title: "[Docs] "
-
-body:
-- type: dropdown
-  id: branch
-  attributes:
-    label: Branch
-    description: This issue is related to the
-    options:
-      - main branch  https://mmagic.readthedocs.io/en/latest/
-      - 0.x branch https://mmagic.readthedocs.io/en/0.x/
-  validations:
-    required: true
-
-- type: textarea
-  attributes:
-    label: 📚 The doc issue
-    description: >
-      A clear and concise description of the issue.
-  validations:
-    required: true
-
-- type: textarea
-  attributes:
-    label: Suggest a potential alternative/fix
-    description: >
-      Tell us how we could improve the documentation in this regard.
-- type: markdown
-  attributes:
-    value: >
-      Thanks for contributing 🎉!
diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
deleted file mode 100644
index 1192d78314..0000000000
--- a/.github/ISSUE_TEMPLATE/config.yml
+++ /dev/null
@@ -1,9 +0,0 @@
-blank_issues_enabled: false
-
-contact_links:
-  - name: 💬 Forum
-    url: https://github.com/open-mmlab/mmagic/discussions
-    about: Ask general usage questions and discuss with other MMagic community members
-  - name: 🌐 Explore OpenMMLab
-    url: https://openmmlab.com/
-    about: Get to know more about OpenMMLab
diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md
deleted file mode 100644
index 8397dca71d..0000000000
--- a/.github/pull_request_template.md
+++ /dev/null
@@ -1,35 +0,0 @@
-Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. If you do not understand some items, don't worry: just create the pull request and ask the maintainers for help.
-
-## Motivation
-
-Please describe the motivation for this PR and the goal you want to achieve with it.
-
-## Modification
-
-Please briefly describe the modifications made in this PR.
-
-## BC-breaking (Optional)
-
-Does the modification introduce changes that break the backward compatibility of downstream repositories?
-If so, please describe how it breaks the compatibility and how downstream projects should modify their code to stay compatible with this PR.
-
-## Use cases (Optional)
-
-If this PR introduces a new feature, it is better to list some use cases here and update the documentation.
-
-## Checklist
-
-Submitting this pull request means that:
-
-**Before PR**:
-
-- [x] I have read and followed the workflow indicated in the [CONTRIBUTING.md](https://github.com/open-mmlab/mmagic/blob/main/.github/CONTRIBUTING.md) to create this PR.
-- [x] Pre-commit or linting tools indicated in [CONTRIBUTING.md](https://github.com/open-mmlab/mmagic/blob/main/.github/CONTRIBUTING.md) are used to fix the potential lint issues.
-- [x] Bug fixes are covered by unit tests; the case that causes the bug should be added to the unit tests.
-- [x] New functionalities are covered by complete unit tests. If not, please add more unit tests to ensure correctness.
-- [x] The documentation has been modified accordingly, including docstring or example tutorials.
-
-**After PR**:
-
-- [x] If the modification has potential influence on downstream or other related projects, this PR should be tested with some of those projects.
-- [x] CLA has been signed and all committers have signed the CLA in this PR.
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
new file mode 100644
index 0000000000..ba88e56d9a
--- /dev/null
+++ b/.github/workflows/ci.yml
@@ -0,0 +1,27 @@
+name: ci
+on:
+  push:
+    branches:
+      - master
+      - main
+permissions:
+  contents: write
+jobs:
+  deploy:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v3
+      - uses: actions/setup-python@v4
+        with:
+          python-version: 3.x
+      - uses: actions/cache@v3
+        with:
+          key: mkdocs-material-${{ github.ref }}
+          path: .cache
+          restore-keys: |
+            mkdocs-material-
+      - run: pip install mkdocs-material
+      - run: pip install mkdocs-jupyter
+      - run: pip install jieba
+      - run: pip install mkdocs-git-revision-date-localized-plugin
+      - run: mkdocs gh-deploy --force
\ No newline at end of file
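
The new workflow installs `mkdocs-material`, `mkdocs-jupyter`, `jieba`, and `mkdocs-git-revision-date-localized-plugin`, then runs `mkdocs gh-deploy --force`, which builds the site and force-pushes it to the `gh-pages` branch (hence the `contents: write` permission). For the deploy to work, the repository's `mkdocs.yml` must declare the matching theme and plugins; a minimal sketch follows, with the site name and the exact plugin list as assumptions rather than this repository's actual config:

```yaml
# Hypothetical mkdocs.yml consistent with the packages installed by the ci workflow.
site_name: Example Docs          # placeholder, not the real site name
theme:
  name: material                 # theme shipped by the mkdocs-material package
plugins:
  - search                       # built-in; mkdocs-material's search can use jieba to segment Chinese text
  - mkdocs-jupyter               # renders Jupyter notebooks as documentation pages
  - git-revision-date-localized  # shows per-page last-update dates from git history
```
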
diff --git a/.github/workflows/lint.yml b/.github/workflows/lint.yml
deleted file mode 100644
index 1db59bbe6f..0000000000
--- a/.github/workflows/lint.yml
+++ /dev/null
@@ -1,27 +0,0 @@
-name: lint
-
-on: [push, pull_request]
-
-concurrency:
-  group: ${{ github.workflow }}-${{ github.ref }}
-  cancel-in-progress: true
-
-jobs:
-  lint:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v2
-      - name: Set up Python 3.7
-        uses: actions/setup-python@v2
-        with:
-          python-version: 3.7
-      - name: Install pre-commit hook
-        run: |
-          pip install pre-commit
-          pre-commit install
-      - name: Linting
-        run: pre-commit run --all-files
-      - name: Check docstring coverage
-        run: |
-          pip install interrogate
-          interrogate -v --ignore-init-method --ignore-module --ignore-nested-functions --ignore-regex "__repr__" --fail-under 90 mmagic
diff --git a/.github/workflows/merge_stage_test.yml b/.github/workflows/merge_stage_test.yml
deleted file mode 100644
index 0dd80a7274..0000000000
--- a/.github/workflows/merge_stage_test.yml
+++ /dev/null
@@ -1,227 +0,0 @@
-name: merge_stage_test
-
-on:
-  push:
-    paths-ignore:
-      - 'README.md'
-      - 'README_zh-CN.md'
-      - 'docs/**'
-      - '.owners.yml'
-      - '.github/ISSUE_TEMPLATE/**'
-      - '.github/*.md'
-      - '.dev_scripts/**'
-      - '.circleci/**'
-      - 'configs/**'
-      - 'projects/**'
-
-    branches:
-      - dev-1.x
-      - test-1.x
-      - main
-      - test-branch
-
-concurrency:
-  group: ${{ github.workflow }}-${{ github.ref }}
-  cancel-in-progress: true
-
-jobs:
-  build_cpu_py:
-    runs-on: ubuntu-22.04
-    strategy:
-      matrix:
-        python-version: [3.8, 3.9]
-        torch: [1.8.1]
-        include:
-          - torch: 1.8.1
-            torchvision: 0.9.1
-    steps:
-      - uses: actions/checkout@v3
-      - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v4
-        with:
-          python-version: ${{ matrix.python-version }}
-      - name: Upgrade pip
-        run: pip install pip --upgrade && pip install wheel
-      - name: Install PyTorch
-        run: pip install torch==${{matrix.torch}}+cpu torchvision==${{matrix.torchvision}}+cpu -f https://download.pytorch.org/whl/cpu/torch_stable.html
-      - name: Install MMEngine
-        run: pip install git+https://github.com/open-mmlab/mmengine.git@main
-      - name: Install MMCV
-        run: |
-          pip install -U openmim
-          mim install 'mmcv >= 2.0.0'
-      - name: Install other dependencies
-        run: |
-          pip install -r requirements/tests.txt
-      - name: Build and install
-        run: rm -rf .eggs && pip install -e .
-      - name: Run unittests and generate coverage report
-        run: |
-          coverage run --branch --source mmagic -m pytest tests/
-          coverage xml
-          coverage report -m
-
-  build_cpu_pt:
-    runs-on: ubuntu-22.04
-    strategy:
-      matrix:
-        python-version: [3.7]
-        torch: [1.8.1, 1.9.1, 1.10.1, 1.11.0, 1.12.1, 1.13.0]
-        include:
-          - torch: 1.8.1
-            torchvision: 0.9.1
-          - torch: 1.9.1
-            torchvision: 0.10.1
-          - torch: 1.10.1
-            torchvision: 0.11.2
-          - torch: 1.11.0
-            torchvision: 0.12.0
-          - torch: 1.12.1
-            torchvision: 0.13.1
-          - torch: 1.13.0
-            torchvision: 0.14.0
-    steps:
-      - uses: actions/checkout@v3
-      - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v4
-        with:
-          python-version: ${{ matrix.python-version }}
-      - name: Upgrade pip
-        run: pip install pip --upgrade && pip install wheel
-      - name: Install PyTorch
-        run: pip install torch==${{matrix.torch}}+cpu torchvision==${{matrix.torchvision}}+cpu -f https://download.pytorch.org/whl/cpu/torch_stable.html
-      - name: Install MMEngine
-        run: pip install git+https://github.com/open-mmlab/mmengine.git@main
-      - name: Install MMCV
-        run: |
-          pip install -U openmim
-          mim install 'mmcv >= 2.0.0'
-      - name: Install other dependencies
-        run: |
-          pip install -r requirements/tests.txt
-      - name: Build and install
-        run: rm -rf .eggs && pip install -e .
-      - name: Run unittests and generate coverage report
-        run: |
-          coverage run --branch --source mmagic -m pytest tests/
-          coverage xml --omit="**/stylegan3_ops/*,**/conv2d_gradfix.py,**/grid_sample_gradfix.py,**/misc.py,**/upfirdn2d.py,**all_gather_layer.py"
-          coverage report -m
-      # Only upload coverage report for python3.7 && pytorch1.8.1 cpu
-      - name: Upload coverage to Codecov
-        if: ${{matrix.torch == '1.8.1' && matrix.python-version == '3.7'}}
-        uses: codecov/codecov-action@v1.0.14
-        with:
-          file: ./coverage.xml
-          flags: unittests
-          env_vars: OS,PYTHON
-          name: codecov-umbrella
-          fail_ci_if_error: false
-
-  build_cu102:
-    runs-on: ubuntu-22.04
-    container:
-      image: pytorch/pytorch:1.8.1-cuda10.2-cudnn7-devel
-    strategy:
-      matrix:
-        python-version: [3.7]
-        include:
-          - torch: 1.8.1
-            cuda: 10.2
-    steps:
-      - uses: actions/checkout@v3
-      - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v4
-        with:
-          python-version: ${{ matrix.python-version }}
-      - name: Upgrade pip
-        run: pip install pip --upgrade && pip install wheel
-      - name: Fetch GPG keys
-        run: |
-          apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/3bf863cc.pub
-          apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu2204/x86_64/7fa2af80.pub
-      - name: Install system dependencies
-        run: |
-          apt-get update && apt-get install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libsm6 libxrender-dev libxext6
-      - name: Install PyTorch
-        run: pip install torch==1.8.1+cpu torchvision==0.9.1+cpu -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
-      - name: Install mmagic dependencies
-        run: |
-          pip install -U openmim
-          mim install 'mmcv >= 2.0.0'
-          pip install -r requirements/tests.txt
-      - name: Build and install
-        run: |
-          pip install -e .
-
-  build_cu116:
-    runs-on: ubuntu-22.04
-    container:
-      image: pytorch/pytorch:1.13.0-cuda11.6-cudnn8-devel
-    strategy:
-      matrix:
-        python-version: [3.7]
-        include:
-          - torch: 1.8.1
-            cuda: 10.2
-    steps:
-      - uses: actions/checkout@v3
-      - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v4
-        with:
-          python-version: ${{ matrix.python-version }}
-      - name: Upgrade pip
-        run: pip install pip --upgrade && pip install wheel
-      - name: Fetch GPG keys
-        run: |
-          apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/3bf863cc.pub
-          apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu2204/x86_64/7fa2af80.pub
-      - name: Install system dependencies
-        run: |
-          apt-get update && apt-get install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libsm6 libxrender-dev libxext6
-      - name: Install PyTorch
-        run: pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cpu
-      - name: Install mmagic dependencies
-        run: |
-          pip install git+https://github.com/open-mmlab/mmengine.git@main
-          pip install -U openmim
-          mim install 'mmcv >= 2.0.0'
-          pip install -r requirements/tests.txt
-      - name: Build and install
-        run: |
-          pip install -e .
-      - name: Run unittests and generate coverage report
-        run: |
-          coverage run --branch --source mmagic -m pytest tests/
-          coverage xml --omit="**/stylegan3_ops/*,**/conv2d_gradfix.py,**/grid_sample_gradfix.py,**/misc.py,**/upfirdn2d.py,**all_gather_layer.py"
-          coverage report -m
-
-  build_windows:
-    runs-on: windows-2022
-    strategy:
-      matrix:
-        python: [3.7]
-        platform: [cpu, cu111]
-    steps:
-      - uses: actions/checkout@v3
-      - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v4
-        with:
-          python-version: ${{ matrix.python-version }}
-      - name: Upgrade pip
-        run: python -m pip install pip --upgrade && pip install wheel
-      - name: Install lmdb
-        run: python -m pip install lmdb
-      - name: Install PyTorch
-        run: python -m pip install torch==1.8.1+${{matrix.platform}} torchvision==0.9.1+${{matrix.platform}} -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
-      - name: Install mmagic dependencies
-        run: |
-          python -m pip install git+https://github.com/open-mmlab/mmengine.git@main
-          python -m pip install -U openmim
-          mim install 'mmcv >= 2.0.0'
-          python -m pip install -r requirements/tests.txt
-      - name: Build and install
-        run: |
-          python -m pip install -e .
-      - name: Run unittests and generate coverage report
-        run: |
-          pytest tests/
diff --git a/.github/workflows/pr_stage_test.yml b/.github/workflows/pr_stage_test.yml
deleted file mode 100644
index daeae03b8b..0000000000
--- a/.github/workflows/pr_stage_test.yml
+++ /dev/null
@@ -1,139 +0,0 @@
-name: pr_stage_test
-
-on:
-  pull_request:
-    paths-ignore:
-      - 'README.md'
-      - 'README_zh-CN.md'
-      - '.owners.yml'
-      - '.github/ISSUE_TEMPLATE/**'
-      - '.github/*.md'
-      - 'docs/**'
-      - 'projects/**'
-      - '.dev_scripts/**'
-      - '.circleci/**'
-      - 'configs/**'
-
-concurrency:
-  group: ${{ github.workflow }}-${{ github.ref }}
-  cancel-in-progress: true
-
-jobs:
-  build_cpu:
-    runs-on: ubuntu-22.04
-    strategy:
-      matrix:
-        python-version: [3.8]
-        include:
-          - torch: 2.0.1
-            torchvision: 0.15.2
-    steps:
-      - uses: actions/checkout@v3
-      - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v4
-        with:
-          python-version: ${{ matrix.python-version }}
-      - name: Upgrade pip
-        run: pip install pip --upgrade && pip install wheel
-      - name: Install PyTorch
-        run: pip install torch==${{matrix.torch}}+cpu torchvision==${{matrix.torchvision}}+cpu -f https://download.pytorch.org/whl/torch_stable.html
-      - name: Install MMEngine
-        run: pip install git+https://github.com/open-mmlab/mmengine.git@main
-      - name: Install MMCV
-        run: |
-          pip install -U openmim
-          mim install 'mmcv >= 2.0.0'
-      - name: Install other dependencies
-        run: |
-          pip install -r requirements/tests.txt
-      - name: Build and install
-        run: rm -rf .eggs && pip install -e .
-      - name: Run unittests and generate coverage report
-        run: |
-          coverage run --branch --source mmagic -m pytest tests/
-          coverage xml --omit="**/stylegan3_ops/*,**/conv2d_gradfix.py,**/grid_sample_gradfix.py,**/misc.py,**/upfirdn2d.py,**all_gather_layer.py"
-          coverage report -m
-      # Upload coverage report
-      - name: Upload coverage to Codecov
-        uses: codecov/codecov-action@v1.0.14
-        with:
-          file: ./coverage.xml
-          flags: unittests
-          env_vars: OS,PYTHON
-          name: codecov-umbrella
-          fail_ci_if_error: false
-      # - name: Setup tmate session
-      #   if: ${{ failure() }}
-      #   uses: mxschmitt/action-tmate@v3
-
-  build_cu118:
-    runs-on: ubuntu-22.04
-    container:
-      image: pytorch/pytorch:2.1.0-cuda11.8-cudnn8-devel
-    strategy:
-      matrix:
-        python-version: [3.8]
-    steps:
-      - uses: actions/checkout@v3
-      - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v4
-        with:
-          python-version: ${{ matrix.python-version }}
-      - name: Upgrade pip
-        run: pip install pip --upgrade && pip install wheel
-      - name: Fetch GPG keys
-        run: |
-          apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/3bf863cc.pub
-          apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu2204/x86_64/7fa2af80.pub
-      - name: Install system dependencies
-        run: |
-          apt-get update
-          DEBIAN_FRONTEND=noninteractive apt-get install -y ffmpeg libsm6 libxext6 git ninja-build libglib2.0-0 libxrender-dev
-      - name: Install PyTorch
-        run: pip install torch==2.0.1+cpu torchvision==0.15.2+cpu -f https://download.pytorch.org/whl/torch_stable.html
-      - name: Install mmagic dependencies
-        run: |
-          pip install git+https://github.com/open-mmlab/mmengine.git@main
-          pip install -U openmim
-          mim install 'mmcv >= 2.0.0'
-          pip install -r requirements/tests.txt
-      - name: Build and install
-        run: |
-          pip install -e .
-      # - name: Setup tmate session
-      #   if: ${{ failure() }}
-      #   uses: mxschmitt/action-tmate@v3
-
-  build_windows:
-    runs-on: windows-2022
-    strategy:
-      matrix:
-        python-version: [3.8]
-        platform: [cpu, cu118]
-    steps:
-      - uses: actions/checkout@v3
-      - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v4
-        with:
-          python-version: ${{ matrix.python-version }}
-      - name: Upgrade pip
-        run: python -m pip install pip --upgrade && pip install wheel
-      - name: Install lmdb
-        run: python -m pip install lmdb
-      - name: Install PyTorch
-        run: python -m pip install torch==2.0.1+${{matrix.platform}} torchvision==0.15.2+${{matrix.platform}} -f https://download.pytorch.org/whl/torch_stable.html
-      - name: Install mmagic dependencies
-        run: |
-          python -m pip install git+https://github.com/open-mmlab/mmengine.git@main
-          python -m pip install -U openmim
-          mim install 'mmcv >= 2.0.0'
-          python -m pip install -r requirements/tests.txt
-      - name: Build and install
-        run: |
-          python -m pip install -e .
-      - name: Run unittests and generate coverage report
-        run: |
-          pytest tests/
-      # - name: Setup tmate session
-      #   if: ${{ failure() }}
-      #   uses: mxschmitt/action-tmate@v3
diff --git a/.github/workflows/publish-to-pypi.yml b/.github/workflows/publish-to-pypi.yml
deleted file mode 100644
index 69a364aa07..0000000000
--- a/.github/workflows/publish-to-pypi.yml
+++ /dev/null
@@ -1,27 +0,0 @@
-name: deploy
-
-on: push
-
-concurrency:
-  group: ${{ github.workflow }}-${{ github.ref }}
-  cancel-in-progress: true
-
-jobs:
-  build-n-publish:
-    runs-on: ubuntu-latest
-    if: startsWith(github.event.ref, 'refs/tags')
-    steps:
-      - uses: actions/checkout@v2
-      - name: Set up Python 3.7
-        uses: actions/setup-python@v1
-        with:
-          python-version: 3.7
-      - name: Build MMagic
-        run: |
-          pip install torch==1.8.1+cpu torchvision==0.9.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
-          pip install wheel
-          python setup.py sdist bdist_wheel
-      - name: Publish distribution to PyPI
-        run: |
-          pip install twine
-          twine upload dist/* -u __token__ -p ${{ secrets.pypi_password }}
diff --git a/.github/workflows/test_mim.yml b/.github/workflows/test_mim.yml
deleted file mode 100644
index c030d4ef17..0000000000
--- a/.github/workflows/test_mim.yml
+++ /dev/null
@@ -1,44 +0,0 @@
-name: test-mim
-
-on:
-  push:
-    paths:
-      - 'model-index.yml'
-      - 'configs/**'
-
-  pull_request:
-    paths:
-      - 'model-index.yml'
-      - 'configs/**'
-
-concurrency:
-  group: ${{ github.workflow }}-${{ github.ref }}
-  cancel-in-progress: true
-
-jobs:
-  build_cpu:
-    runs-on: ubuntu-22.04
-    strategy:
-      matrix:
-        python-version: [3.7]
-        torch: [1.8.0]
-        include:
-          - torch: 1.8.0
-            torch_version: torch1.8
-            torchvision: 0.9.0
-    steps:
-      - uses: actions/checkout@v3
-      - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v4
-        with:
-          python-version: ${{ matrix.python-version }}
-      - name: Upgrade pip
-        run: pip install pip --upgrade && pip install wheel
-      - name: Install PyTorch
-        run: pip install torch==${{matrix.torch}}+cpu torchvision==${{matrix.torchvision}}+cpu -f https://download.pytorch.org/whl/torch_stable.html
-      - name: Install openmim
-        run: pip install openmim
-      - name: Build and install
-        run: rm -rf .eggs && mim install -e .
-      - name: test commands of mim
-        run: mim search mmagic
diff --git a/.readthedocs.yml b/.readthedocs.yml
deleted file mode 100644
index 911ceb43ac..0000000000
--- a/.readthedocs.yml
+++ /dev/null
@@ -1,13 +0,0 @@
-version: 2
-
-formats: [pdf, epub]
-
-build:
-  os: ubuntu-22.04
-  tools:
-    python: "3.7"
-
-python:
-  install:
-    - requirements: requirements/docs.txt
-    - requirements: requirements/readthedocs.txt
diff --git a/README.md b/README.md
index 757f10ea0d..5dc09a09a3 100644
--- a/README.md
+++ b/README.md
@@ -1,19 +1,19 @@
 <div id="top" align="center">
-  <img src="docs/en/_static/image/mmagic-logo.png" width="500px"/>
+  <img src="docs/zh_cn/_static/image/mmagic-logo.png" width="500px"/>
   <div>&nbsp;</div>
   <div align="center">
     <font size="10"><b>M</b>ultimodal <b>A</b>dvanced, <b>G</b>enerative, and <b>I</b>ntelligent <b>C</b>reation (MMagic [em'mædʒɪk])</font>
   </div>
   <div>&nbsp;</div>
   <div align="center">
-    <b><font size="5">OpenMMLab website</font></b>
+    <b><font size="5">OpenMMLab 官网</font></b>
     <sup>
       <a href="https://openmmlab.com">
         <i><font size="4">HOT</font></i>
       </a>
     </sup>
     &nbsp;&nbsp;&nbsp;&nbsp;
-    <b><font size="5">OpenMMLab platform</font></b>
+    <b><font size="5">OpenMMLab 开放平台</font></b>
     <sup>
       <a href="https://platform.openmmlab.com">
         <i><font size="4">TRY IT OUT</font></i>
@@ -23,7 +23,7 @@
   <div>&nbsp;</div>
 
 [![PyPI](https://badge.fury.io/py/mmagic.svg)](https://pypi.org/project/mmagic/)
-[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmagic.readthedocs.io/en/latest/)
+[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmagic.readthedocs.io/zh_CN/latest/)
 [![badge](https://github.com/open-mmlab/mmagic/workflows/build/badge.svg)](https://github.com/open-mmlab/mmagic/actions)
 [![codecov](https://codecov.io/gh/open-mmlab/mmagic/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmagic)
 [![license](https://img.shields.io/github/license/open-mmlab/mmagic.svg)](https://github.com/open-mmlab/mmagic/blob/main/LICENSE)
@@ -31,14 +31,14 @@
 [![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmagic.svg)](https://github.com/open-mmlab/mmagic/issues)
 [![Open in OpenXLab](https://cdn-static.openxlab.org.cn/app-center/openxlab_demo.svg)](https://openxlab.org.cn/apps?search=mmagic)
 
-[📘Documentation](https://mmagic.readthedocs.io/en/latest/) |
-[🛠️Installation](https://mmagic.readthedocs.io/en/latest/get_started/install.html) |
-[📊Model Zoo](https://mmagic.readthedocs.io/en/latest/model_zoo/overview.html) |
-[🆕Update News](https://mmagic.readthedocs.io/en/latest/changelog.html) |
-[🚀Ongoing Projects](https://github.com/open-mmlab/mmagic/projects) |
-[🤔Reporting Issues](https://github.com/open-mmlab/mmagic/issues)
+[📘Documentation](https://mmagic.readthedocs.io/zh_CN/latest/) |
+[🛠️Installation](https://mmagic.readthedocs.io/zh_CN/latest/get_started/install.html) |
+[📊Model Zoo](https://mmagic.readthedocs.io/zh_CN/latest/model_zoo/overview.html) |
+[🆕Update News](https://mmagic.readthedocs.io/zh_CN/latest/changelog.html) |
+[🚀Ongoing Projects](https://github.com/open-mmlab/mmagic/projects) |
+[🤔Reporting Issues](https://github.com/open-mmlab/mmagic/issues)
 
-English | [简体中文](README_zh-CN.md)
+[English](README.md) | Simplified Chinese
 
 </div>
 
@@ -56,25 +56,25 @@ English | [简体中文](README_zh-CN.md)
     <img src="https://user-images.githubusercontent.com/25839884/218346691-ceb2116a-465a-40af-8424-9f30d2348ca9.png" width="3%" alt="" /></a>
 </div>
 
-## 🚀 What's New <a><img width="35" height="20" src="https://user-images.githubusercontent.com/12782558/212848161-5e783dd6-11e8-4fe0-bbba-39ffb77730be.png"></a>
+## 🚀 Latest News <a><img width="35" height="20" src="https://user-images.githubusercontent.com/12782558/212848161-5e783dd6-11e8-4fe0-bbba-39ffb77730be.png"></a>
 
-### New release [**MMagic v1.1.0**](https://github.com/open-mmlab/mmagic/releases/tag/v1.1.0) \[22/09/2023\]:
+### The latest [**MMagic v1.1.0**](https://github.com/open-mmlab/mmagic/releases/tag/v1.1.0) version was released on \[22/09/2023\]:
 
-- Support ViCo, a new SD personalization method. [Click to View](https://github.com/open-mmlab/mmagic/blob/main/configs/vico/README.md)
-- Support AnimateDiff, a popular text2animation method. [Click to View](https://github.com/open-mmlab/mmagic/blob/main/configs/animatediff/README.md)
-- Support SDXL(Stable Diffusion XL). [Click to View](https://github.com/open-mmlab/mmagic/blob/main/configs/stable_diffusion_xl/README.md)
-- Support DragGAN implementation with MMagic. [Click to View](https://github.com/open-mmlab/mmagic/blob/main/configs/draggan/README.md)
-- Support FastComposer, a new multi-subject text-to-image generation method. [Click to View](https://github.com/open-mmlab/mmagic/blob/main/configs/fastcomposer/README.md)
+- Support ViCo, a new personalization method for SD (Stable Diffusion). [Click to view](https://github.com/open-mmlab/mmagic/blob/main/configs/vico/README.md)
+- Support AnimateDiff, a popular text-to-animation method. [Click to view](https://github.com/open-mmlab/mmagic/blob/main/configs/animatediff/README.md)
+- Support SDXL (Stable Diffusion XL). [Click to view](https://github.com/open-mmlab/mmagic/blob/main/configs/stable_diffusion_xl/README.md)
+- Support the DragGAN implementation with MMagic. [Click to view](https://github.com/open-mmlab/mmagic/blob/main/configs/draggan/README.md)
+- Support FastComposer, a new multi-subject text-to-image generation method. [Click to view](https://github.com/open-mmlab/mmagic/blob/main/configs/fastcomposer/README.md)
 
-We are excited to announce the release of MMagic v1.0.0 that inherits from [MMEditing](https://github.com/open-mmlab/mmediting) and [MMGeneration](https://github.com/open-mmlab/mmgeneration).
+We are officially releasing MMagic v1.0.0, which inherits from [MMEditing](https://github.com/open-mmlab/mmediting) and [MMGeneration](https://github.com/open-mmlab/mmgeneration).
 
-After iterative updates with OpenMMLab 2.0 framework and merged with MMGeneration, MMEditing has become a powerful tool that supports low-level algorithms based on both GAN and CNN. Today, MMEditing embraces Generative AI and transforms into a more advanced and comprehensive AIGC toolkit: **MMagic** (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation). MMagic will provide more agile and flexible experimental support for researchers and AIGC enthusiasts, and help you on your AIGC exploration journey.
+After iterative updates with the OpenMMLab 2.0 framework and the merger with MMGeneration, MMEditing has become a powerful tool that supports low-level vision algorithms based on both GANs and CNNs. Today, MMEditing embraces generative AI and is officially renamed **MMagic** (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation), dedicated to building a more advanced and comprehensive open-source AIGC toolkit. MMagic will provide researchers and AIGC enthusiasts with more agile and flexible experimental support, and help you on your AIGC exploration journey.
 
-We highlight the following new features.
+Here are the highlighted new features of this release:
 
-**1. New Models**
+**1. New Algorithms**
 
-We support 11 new models in 4 new tasks.
+We support 4 new tasks and 11 new algorithms.
 
 - Text2Image / Diffusion
   - ControlNet
@@ -94,108 +94,111 @@ We support 11 new models in 4 new tasks.
 
 **2. Magic Diffusion Model**
 
-For the Diffusion Model, we provide the following "magic" :
+For the Diffusion Model, we provide the following "magic":
 
-- Support image generation based on Stable Diffusion and Disco Diffusion.
-- Support Finetune methods such as Dreambooth and DreamBooth LoRA.
-- Support controllability in text-to-image generation using ControlNet.
-- Support acceleration and optimization strategies based on xFormers to improve training and inference efficiency.
-- Support video generation based on MultiFrame Render.
-- Support calling basic models and sampling strategies through DiffuserWrapper.
+- Support image generation based on Stable Diffusion and Disco Diffusion.
+- Support fine-tuning methods such as DreamBooth and DreamBooth LoRA.
+- Support controllable text-to-image generation with ControlNet.
+- Support xFormers-based acceleration and optimization strategies to improve training and inference efficiency.
+- Support video generation based on MultiFrame Render.
+- Support calling the base models and sampling strategies of Diffusers through a wrapper.
 
-**3. Upgraded Framework**
+**3. Framework Upgrades**
 
-By using MMEngine and MMCV of OpenMMLab 2.0 framework, MMagic has upgraded in the following new features:
+Through MMEngine and MMCV of the OpenMMLab 2.0 framework, MMagic has been upgraded in the following respects:
 
-- Refactor DataSample to support the combination and splitting of batch dimensions.
-- Refactor DataPreprocessor and unify the data format for various tasks during training and inference.
-- Refactor MultiValLoop and MultiTestLoop, supporting the evaluation of both generation-type metrics (e.g. FID) and reconstruction-type metrics (e.g. SSIM), and supporting the evaluation of multiple datasets at once.
-- Support visualization on local files or using tensorboard and wandb.
-- Support for 33+ algorithms accelerated by Pytorch 2.0.
+- Refactor DataSample to support the combination and splitting of the batch dimension.
+- Refactor DataPreprocessor and unify the data format of various tasks during training and inference.
+- Refactor MultiValLoop and MultiTestLoop to support evaluating both generation-type metrics (e.g. FID) and reconstruction-type metrics (e.g. SSIM), and to support evaluating multiple datasets at once.
+- Support visualization on local files or with TensorBoard and wandb.
+- Support PyTorch 2.0 acceleration for 33+ algorithms.
 
-**MMagic** has supported all the tasks, models, metrics, and losses in [MMEditing](https://github.com/open-mmlab/mmediting) and [MMGeneration](https://github.com/open-mmlab/mmgeneration) and unifies interfaces of all components based on [MMEngine](https://github.com/open-mmlab/mmengine) 😍.
+**MMagic** supports all the tasks, models, loss functions, and evaluation metrics of [MMEditing](https://github.com/open-mmlab/mmediting) and [MMGeneration](https://github.com/open-mmlab/mmgeneration), and unifies the interfaces of all components based on [MMEngine](https://github.com/open-mmlab/mmengine) 😍.
 
-Please refer to [changelog.md](docs/en/changelog.md) for details and release history.
-
-Please refer to [migration documents](docs/en/migration/overview.md) to migrate from [old version](https://github.com/open-mmlab/mmagic/tree/0.x) MMEditing 0.x to new version MMagic 1.x .
+For more details of the version updates and release history, please read the [changelog](docs/zh_cn/changelog.md). To migrate from the [old version](https://github.com/open-mmlab/mmagic/tree/master) MMEditing 0.x to the new version MMagic 1.x, please read the [migration documents](docs/zh_cn/migration/overview.md).
 
 <div id="table" align="center"></div>
 
-## 📄 Table of Contents
+## 📄 Table of Contents
 
-- [📖 Introduction](#-introduction)
-- [🙌 Contributing](#-contributing)
-- [🛠️ Installation](#️-installation)
-- [📊 Model Zoo](#-model-zoo)
-- [🤝 Acknowledgement](#-acknowledgement)
-- [🖊️ Citation](#️-citation)
-- [🎫 License](#-license)
-- [🏗️ ️OpenMMLab Family](#️-️openmmlab-family)
+- [🚀 Latest News](#-latest-news-)
+  - [The latest **MMagic v1.1.0** version was released on \[22/09/2023\]:](#the-latest-mmagic-v110-version-was-released-on-22092023)
+- [📄 Table of Contents](#-table-of-contents)
+- [📖 Introduction](#-introduction)
+  - [✨ Major Features](#-major-features)
+  - [✨ Best Practice](#-best-practice)
+- [🙌 Contributing](#-contributing)
+- [🛠️ Installation](#️-installation)
+- [📊 Model Zoo](#-model-zoo)
+- [🤝 Acknowledgement](#-acknowledgement)
+- [🖊️ Citation](#️-citation)
+- [🎫 License](#-license)
+- [🏗️ ️Other OpenMMLab Projects](#️-️other-openmmlab-projects)
+- [Join the OpenMMLab Community](#join-the-openmmlab-community)
 
-## 📖 Introduction
+## 📖 Introduction
 
-MMagic (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation) is an advanced and comprehensive AIGC toolkit that inherits from [MMEditing](https://github.com/open-mmlab/mmediting) and [MMGeneration](https://github.com/open-mmlab/mmgeneration). It is an open-source image and video editing&generating toolbox based on PyTorch. It is a part of the [OpenMMLab](https://openmmlab.com/) project.
+MMagic is an open-source image and video editing and generation toolbox based on PyTorch. It is a part of the [OpenMMLab](https://openmmlab.com/) project.
 
-Currently, MMagic support multiple image and video generation/editing tasks.
+Currently, MMagic supports a variety of image and video generation/editing tasks.
 
 https://user-images.githubusercontent.com/49083766/233564593-7d3d48ed-e843-4432-b610-35e3d257765c.mp4
 
-### ✨ Major features
+### ✨ Major Features
 
-- **State of the Art Models**
+- **SOTA Algorithms**
 
-  MMagic provides state-of-the-art generative models to process, edit and synthesize images and videos.
+  MMagic provides SOTA algorithms for processing, editing, and generating images and videos.
 
-- **Powerful and Popular Applications**
+- **Powerful and Popular Applications**
 
-  MMagic supports popular and contemporary image restoration, text-to-image, 3D-aware generation, inpainting, matting, super-resolution and generation applications. Specifically, MMagic supports fine-tuning for stable diffusion and many exciting diffusion's application such as ControlNet Animation with SAM. MMagic also supports GAN interpolation, GAN projection, GAN manipulations and many other popular GAN’s applications. It’s time to begin your AIGC exploration journey!
+  MMagic supports applications for popular tasks such as image restoration, text-to-image, 3D generation, inpainting, matting, super-resolution, and generation. In particular, MMagic supports fine-tuning of Stable Diffusion and many exciting diffusion applications, such as ControlNet animation generation. MMagic also supports GAN interpolation, projection, editing, and other popular GAN applications. Start your AIGC exploration journey now!
 
-- **Efficient Framework**
+- **Efficient Framework**
 
-  By using MMEngine and MMCV of OpenMMLab 2.0 framework, MMagic decompose the editing framework into different modules and one can easily construct a customized editor framework by combining different modules. We can define the training process just like playing with Legos and provide rich components and strategies. In MMagic, you can complete controls on the training process with different levels of APIs. With the support of [MMSeparateDistributedDataParallel](https://github.com/open-mmlab/mmengine/blob/main/mmengine/model/wrappers/seperate_distributed.py), distributed training for dynamic architectures can be easily implemented.
+  Through MMEngine and MMCV of the OpenMMLab 2.0 framework, MMagic decomposes the editing framework into different components, and customized editor models can easily be built by combining different modules. We can define the training process just like playing with Legos, with rich components and strategies provided. In MMagic, you can fully control the training process with different levels of APIs. Thanks to [MMSeparateDistributedDataParallel](https://github.com/open-mmlab/mmengine/blob/main/mmengine/model/wrappers/seperate_distributed.py), distributed training of dynamic model architectures can be easily implemented.
 
-### ✨ Best Practice
+### ✨ Best Practice
 
-- The best practice on our main branch works with **Python 3.9+** and **PyTorch 2.0+**.
+- The best practice on our main branch works with **Python 3.9+** and **PyTorch 2.0+**.
 
-<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>
+<p align="right"><a href="#table">🔝返回目录</a></p>
 
-## 🙌 Contributing
+## 🙌 Contributing
 
-More and more community contributors are joining us to make our repo better. Some recent projects are contributed by the community including:
+More and more community contributors are joining us to make our library better. Recent projects contributed by the community include:
 
-- [SDXL](configs/stable_diffusion_xl/README.md) is contributed by  @okotaku.
-- [AnimateDiff](configs/animatediff/README.md) is contributed by @ElliotQi.
-- [ViCo](configs/vico/README.md) is contributed by @FerryHuang.
-- [DragGan](configs/draggan/README.md) is contributed by @qsun1.
-- [FastComposer](configs/fastcomposer/README.md) is contributed by @xiaomile.
+- [SDXL](configs/stable_diffusion_xl/README.md) from @okotaku.
+- [AnimateDiff](configs/animatediff/README.md) from @ElliotQi.
+- [ViCo](configs/vico/README.md) from @FerryHuang.
+- [DragGan](configs/draggan/README.md) from @qsun1.
+- [FastComposer](configs/fastcomposer/README.md) from @xiaomile.
 
-[Projects](projects/README.md) is opened to make it easier for everyone to add projects to MMagic.
+To make it easier for everyone to add projects to MMagic, we have opened [Projects](projects/README.md).
 
-We appreciate all contributions to improve MMagic. Please refer to [CONTRIBUTING.md](https://github.com/open-mmlab/mmcv/blob/main/CONTRIBUTING.md) in MMCV and [CONTRIBUTING.md](https://github.com/open-mmlab/mmengine/blob/main/CONTRIBUTING.md) in MMEngine for more details about the contributing guideline.
+We appreciate all your contributions to improve MMagic. Please refer to [CONTRIBUTING.md](https://github.com/open-mmlab/mmcv/blob/main/CONTRIBUTING_zh-CN.md) in MMCV and [CONTRIBUTING.md](https://github.com/open-mmlab/mmengine/blob/main/CONTRIBUTING_zh-CN.md) in MMEngine for the contributing guidelines.
 
-<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>
+<p align="right"><a href="#table">🔝返回目录</a></p>
 
-## 🛠️ Installation
+## 🛠️ Installation
 
-MMagic depends on [PyTorch](https://pytorch.org/), [MMEngine](https://github.com/open-mmlab/mmengine) and [MMCV](https://github.com/open-mmlab/mmcv).
-Below are quick steps for installation.
+MMagic depends on [PyTorch](https://pytorch.org/), [MMEngine](https://github.com/open-mmlab/mmengine), and [MMCV](https://github.com/open-mmlab/mmcv). Below are quick steps for installation.
 
-**Step 1.**
-Install PyTorch following [official instructions](https://pytorch.org/get-started/locally/).
+**Step 1.**
+Install PyTorch following the [official instructions](https://pytorch.org/get-started/locally/).
 
-**Step 2.**
-Install MMCV, MMEngine and MMagic with [MIM](https://github.com/open-mmlab/mim).
+**Step 2.**
+Install MMCV, MMEngine, and MMagic with [MIM](https://github.com/open-mmlab/mim).
 
-```shell
+```shell
 pip3 install openmim
-mim install mmcv>=2.0.0
-mim install mmengine
-mim install mmagic
+mim install 'mmcv>=2.0.0'
+mim install 'mmengine'
+mim install 'mmagic'
 ```
 
-**Step 3.**
-Verify MMagic has been successfully installed.
+**Step 3.**
+Verify that MMagic has been installed successfully.
 
 ```shell
 cd ~
@@ -203,9 +206,9 @@ python -c "import mmagic; print(mmagic.__version__)"
 # Example output: 1.0.0
 ```
 
-**Getting Started**
+**Getting Started**
 
-After installing MMagic successfully, now you are able to play with MMagic! To generate an image from text, you only need several lines of codes by MMagic!
+After installing MMagic successfully, you can easily get started with MMagic! With just a few lines of code, you can use MMagic to generate an image from text!
 
 ```python
 from mmagic.apis import MMagicInferencer
@@ -215,26 +218,26 @@ result_out_dir = 'output/sd_res.png'
 sd_inferencer.infer(text=text_prompts, result_out_dir=result_out_dir)
 ```
 
-Please see [quick run](docs/en/get_started/quick_run.md) and [inference](docs/en/user_guides/inference.md) for the basic usage of MMagic.
+Please refer to [quick run](docs/zh_cn/get_started/quick_run.md) and [inference](docs/zh_cn/user_guides/inference.md) for the basic usage of MMagic.
 
-**Install MMagic from source**
+**Install MMagic from source**
 
-You can also experiment on the latest developed version rather than the stable release by installing MMagic from source with the following commands:
+You can also install MMagic from source with the following commands, so that you can experiment with the latest development version instead of a released stable version.
 
-```shell
+```shell
 git clone https://github.com/open-mmlab/mmagic.git
 cd mmagic
 pip3 install -e .
 ```
 
-Please refer to [installation](docs/en/get_started/install.md) for more detailed instruction.
+Please refer to the [installation guide](docs/zh_cn/get_started/install.md) for more detailed instructions.
 
-<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>
+<p align="right"><a href="#top">🔝Back to top</a></p>
 
-## 📊 Model Zoo
+## 📊 Model Zoo
 
 <div align="center">
-  <b>Supported algorithms</b>
+  <b>Supported algorithms</b>
 </div>
 <table align="center">
   <tbody>
@@ -378,7 +381,7 @@ Please refer to [installation](docs/en/get_started/install.md) for more detailed
         <ul>
           <li><a href="configs/dim/README.md">DIM (CVPR'2017)</a></li>
           <li><a href="configs/indexnet/README.md">IndexNet (ICCV'2019)</a></li>
-          <li><a href="configs/gca/README.md">GCA (AAAI'2020)</a></li>
+          <li><a href="configs/mask2former">GCA (AAAI'2020)</a></li>
         </ul>
       </td>
       <td>
@@ -392,7 +395,6 @@ Please refer to [installation](docs/en/get_started/install.md) for more detailed
           <li><a href="projects/prompt_to_prompt/README.md">Prompt-to-Prompt (2022)</a></li>
           <li><a href="projects/prompt_to_prompt/README.md">Null-text Inversion (2022)</a></li>
           <li><a href="configs/controlnet/README.md">ControlNet (2023)</a></li>
-          <li><a href="configs/controlnet_animation/README.md">ControlNet Animation (2023)</a></li>
           <li><a href="configs/stable_diffusion_xl/README.md">Stable Diffusion XL (2023)</a></li>
           <li><a href="configs/animatediff/README.md">AnimateDiff (2023)</a></li>
           <li><a href="configs/vico/README.md">ViCo (2023)</a></li>
@@ -410,25 +412,23 @@ Please refer to [installation](docs/en/get_started/install.md) for more detailed
   </tbody>
 </table>
 
-Please refer to [model_zoo](https://mmagic.readthedocs.io/en/latest/model_zoo/overview.html) for more details.
+Please refer to the [model zoo](https://mmagic.readthedocs.io/zh_CN/latest/model_zoo/overview.html) for more details.
 
-<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>
+<p align="right"><a href="#table">🔝返回目录</a></p>
 
-## 🤝 Acknowledgement
+## 🤝 Acknowledgement
 
-MMagic is an open source project that is contributed by researchers and engineers from various colleges and companies. We wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their own new methods.
-
-We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedbacks. Thank you all!
+MMagic is an open-source project contributed by researchers and engineers from various colleges and companies. We appreciate all the contributors who provide algorithm reimplementations and new features, as well as users who give valuable feedback. We hope that this toolbox and benchmark can provide the community with flexible tools to reimplement existing algorithms and develop new models, and thereby keep contributing to the open-source community.
 
 <a href="https://github.com/open-mmlab/mmagic/graphs/contributors">
   <img src="https://contrib.rocks/image?repo=open-mmlab/mmagic" />
 </a>
 
-<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>
+<p align="right"><a href="#table">🔝返回目录</a></p>
 
-## 🖊️ Citation
+## 🖊️ Citation
 
-If MMagic is helpful to your research, please cite it as below.
+If MMagic is helpful to your research, please cite it with the following BibTeX.
 
 ```bibtex
 @misc{mmagic2023,
@@ -448,35 +448,53 @@ If MMagic is helpful to your research, please cite it as below.
 }
 ```
 
-<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>
-
-## 🎫 License
-
-This project is released under the [Apache 2.0 license](LICENSE).
-Please refer to [LICENSES](LICENSE) for the careful check, if you are using our code for commercial matters.
-
-<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>
-
-## 🏗️ ️OpenMMLab Family
-
-- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models.
-- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
-- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
-- [MMPreTrain](https://github.com/open-mmlab/mmpretrain): OpenMMLab Pre-training Toolbox and Benchmark.
-- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
-- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
-- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
-- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
-- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
-- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
-- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
-- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
-- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
-- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
-- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
-- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
-- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
-- [MMagic](https://github.com/open-mmlab/mmagic): OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox.
-- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
-
-<p align="right"><a href="#table">🔝Back to Table of Contents</a></p>
+<p align="right"><a href="#table">🔝返回目录</a></p>
+
+## 🎫 许可证
+
+本项目开源自 [Apache 2.0 license](LICENSE)。
+
+<p align="right"><a href="#table">🔝返回目录</a></p>
+
+## 🏗️ ️OpenMMLab 的其他项目
+
+- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models.
+- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
+- [MIM](https://github.com/open-mmlab/mim): MIM is the unified entry for OpenMMLab projects, algorithms, and models.
+- [MMPreTrain](https://github.com/open-mmlab/mmpretrain): OpenMMLab pre-training toolbox.
+- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab object detection toolbox.
+- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
+- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
+- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox.
+- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
+- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox.
+- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
+- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
+- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
+- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab few-shot learning toolbox and benchmark.
+- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation video understanding toolbox.
+- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab unified video object perception platform.
+- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow estimation toolbox and benchmark.
+- [MMagic](https://github.com/open-mmlab/mmagic): OpenMMLab next-generation toolbox for AI-generated content (AIGC).
+- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
+
+<p align="right"><a href="#table">🔝返回目录</a></p>
+
+## 欢迎加入 OpenMMLab 社区
+
+扫描下方的二维码可关注 OpenMMLab 团队的 [知乎官方账号](https://www.zhihu.com/people/openmmlab),扫描下方微信二维码添加喵喵好友,进入 MMagic 微信交流社群。【加好友申请格式:研究方向+地区+学校/公司+姓名】
+
+<div align="center">
+<img src="docs/zh_cn/_static/image/zhihu_qrcode.jpg" height="500" />  <img src="https://github.com/open-mmlab/mmagic/assets/62195058/0e80cbee-7b81-4648-8bc6-7a3585fa8476" height="500" />
+</div>
+
+In the OpenMMLab community, we will
+
+- 📢 share the cutting-edge core technologies of AI frameworks
+- 💻 walk through the source code of commonly used PyTorch modules
+- 📰 release the latest news about OpenMMLab
+- 🚀 introduce cutting-edge algorithms developed by OpenMMLab
+- 🏃 provide more efficient answers to questions and channels for feedback
+- 🔥 offer a platform for in-depth exchange with developers from all walks of life
+
+Packed with useful content 📘 and waiting for you 💗, the OpenMMLab community looks forward to having you join us 👬
\ No newline at end of file
diff --git a/docs/zh_cn/advanced_guides/data_flow.md b/docs/advanced_guides/data_flow.md
similarity index 100%
rename from docs/zh_cn/advanced_guides/data_flow.md
rename to docs/advanced_guides/data_flow.md
diff --git a/docs/zh_cn/advanced_guides/data_preprocessor.md b/docs/advanced_guides/data_preprocessor.md
similarity index 100%
rename from docs/zh_cn/advanced_guides/data_preprocessor.md
rename to docs/advanced_guides/data_preprocessor.md
diff --git a/docs/zh_cn/advanced_guides/evaluator.md b/docs/advanced_guides/evaluator.md
similarity index 100%
rename from docs/zh_cn/advanced_guides/evaluator.md
rename to docs/advanced_guides/evaluator.md
diff --git a/docs/zh_cn/advanced_guides/structures.md b/docs/advanced_guides/structures.md
similarity index 100%
rename from docs/zh_cn/advanced_guides/structures.md
rename to docs/advanced_guides/structures.md
diff --git a/docs/zh_cn/changelog.md b/docs/changelog.md
similarity index 100%
rename from docs/zh_cn/changelog.md
rename to docs/changelog.md
diff --git a/docs/zh_cn/community/contributing.md b/docs/community/contributing.md
similarity index 100%
rename from docs/zh_cn/community/contributing.md
rename to docs/community/contributing.md
diff --git a/docs/zh_cn/community/projects.md b/docs/community/projects.md
similarity index 100%
rename from docs/zh_cn/community/projects.md
rename to docs/community/projects.md
diff --git a/docs/zh_cn/device/npu_zh.md b/docs/device/npu_zh.md
similarity index 100%
rename from docs/zh_cn/device/npu_zh.md
rename to docs/device/npu_zh.md
diff --git a/docs/en/.dev_scripts/update_dataset_zoo.py b/docs/en/.dev_scripts/update_dataset_zoo.py
deleted file mode 100644
index 07060dab11..0000000000
--- a/docs/en/.dev_scripts/update_dataset_zoo.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import os
-
-from tqdm import tqdm
-
-
-def update_dataset_zoo():
-
-    target_dir = 'dataset_zoo'
-    source_dir = '../../tools/dataset_converters'
-    os.makedirs(target_dir, exist_ok=True)
-
-    # generate overview
-    overviewmsg = """
-# Overview
-
-"""
-
-    # generate index.rst
-    rstmsg = """
-.. toctree::
-   :maxdepth: 1
-   :caption: Dataset Zoo
-
-   overview.md
-"""
-
-    subfolders = os.listdir(source_dir)
-    for subf in tqdm(subfolders, desc='update dataset zoo'):
-
-        target_subf = subf.replace('-', '_').lower()
-        target_readme = os.path.join(target_dir, target_subf + '.md')
-        source_readme = os.path.join(source_dir, subf, 'README.md')
-        if not os.path.exists(source_readme):
-            continue
-
-        overviewmsg += f'\n- [{subf}]({target_subf}.md)'
-        rstmsg += f'\n   {target_subf}.md'
-
-        # generate all tasks dataset_zoo
-        command = f'cat {source_readme} > {target_readme}'
-        os.popen(command)
-
-    with open(os.path.join(target_dir, 'overview.md'), 'w') as f:
-        f.write(overviewmsg)
-
-    with open(os.path.join(target_dir, 'index.rst'), 'w') as f:
-        f.write(rstmsg)
-
-
-if __name__ == '__main__':
-    update_dataset_zoo()
diff --git a/docs/en/.dev_scripts/update_model_zoo.py b/docs/en/.dev_scripts/update_model_zoo.py
deleted file mode 100755
index 5ba0c41de3..0000000000
--- a/docs/en/.dev_scripts/update_model_zoo.py
+++ /dev/null
@@ -1,121 +0,0 @@
-#!/usr/bin/env python
-import os
-from glob import glob
-from os import path as osp
-from pathlib import Path
-
-from modelindex.load_model_index import load
-from tqdm import tqdm
-
-MMAGIC_ROOT = Path(__file__).absolute().parents[3]
-TARGET_ROOT = Path(__file__).absolute().parents[1] / 'model_zoo'
-
-
-def write_file(file, content):
-    os.makedirs(osp.dirname(file), exist_ok=True)
-    with open(file, 'w', encoding='utf-8') as f:
-        f.write(content)
-
-
-def update_model_zoo():
-    """load collections and models from model index, return summary,
-    collections and models."""
-    model_index_file = MMAGIC_ROOT / 'model-index.yml'
-    model_index = load(str(model_index_file))
-    model_index.build_models_with_collections()
-
-    # parse model_index according to task
-    tasks = {}
-    full_models = set()
-    for model in model_index.models:
-        full_models.add(model.full_model)
-        for r in model.results:
-            _task = r.task.lower().split(', ')
-            for t in _task:
-                if t not in tasks:
-                    tasks[t] = set()
-                tasks[t].add(model.full_model)
-
-    # check that the number of collections matches the number of config dirs
-    collections = set([m.in_collection for m in full_models])
-    assert len(collections) == len(os.listdir(MMAGIC_ROOT / 'configs')) - 1
-
-    configs = set([str(MMAGIC_ROOT / m.config) for m in full_models])
-    base_configs = glob(
-        str(MMAGIC_ROOT / 'configs/_base_/**/*.py'), recursive=True)
-    all_configs = glob(str(MMAGIC_ROOT / 'configs/**/*.py'), recursive=True)
-    valid_configs = set(all_configs) - set(base_configs)
-    untrackable_configs = valid_configs - configs
-    assert len(untrackable_configs) == 0, '\n'.join(
-        list(untrackable_configs)) + ' are not trackable.'
-
-    # write for overview.md
-    papers = set()
-    checkpoints = set()
-    for m in full_models:
-        papers.add(m.paper['Title'])
-        if m.weights is not None and m.weights.startswith('https:'):
-            checkpoints.add(m.weights)
-    task_desc = '\n'.join([
-        f"  - [{t}]({t.replace('-', '_').replace(' ', '_')}.md)"
-        for t in list(tasks.keys())
-    ])
-
-    # write overview.md
-    overview = (f'# Overview\n\n'
-                f'* Number of checkpoints: {len(checkpoints)}\n'
-                f'* Number of configs: {len(configs)}\n'
-                f'* Number of papers: {len(papers)}\n'
-                f'  - ALGORITHM: {len(collections)}\n\n'
-                f'* Tasks:\n{task_desc}')
-    write_file(TARGET_ROOT / 'overview.md', overview)
-
-    # write for index.rst
-    task_desc = '\n'.join([
-        f"    {t.replace('-', '_').replace(' ', '_')}.md"
-        for t in list(tasks.keys())
-    ])
-    overview = (f'.. toctree::\n'
-                f'    :maxdepth: 1\n'
-                f'    :caption: Model Zoo\n\n'
-                f'    overview.md\n'
-                f'{task_desc}')
-    write_file(TARGET_ROOT / 'index.rst', overview)
-
-    # write for all the tasks
-    for task, models in tqdm(tasks.items(), desc='create markdown files'):
-        target_md = f"{task.replace('-', '_').replace(' ', '_')}.md"
-        target_md = TARGET_ROOT / target_md
-        models = sorted(models, key=lambda x: -x.data['Year'])
-
-        checkpoints = set()
-        for m in models:
-            if m.weights is not None and m.weights.startswith('https:'):
-                checkpoints.add(m.weights)
-        collections = set([m.in_collection for m in models])
-
-        papers = set()
-        for m in models:
-            papers.add(m.paper['Title'])
-
-        content = ''
-        readme = set()
-        for m in models:
-            if m.readme not in readme:
-                readme.add(m.readme)
-                with open(MMAGIC_ROOT / m.readme, 'r', encoding='utf-8') as f:
-                    c = f.read()
-                content += c.replace('# ', '## ')
-        overview = (f'# {task}\n\n'
-                    f'## Summary\n'
-                    f'* Number of checkpoints: {len(checkpoints)}\n'
-                    f'* Number of configs: {len(models)}\n'
-                    f'* Number of papers: {len(papers)}\n'
-                    f'  - ALGORITHM: {len(collections)}\n\n'
-                    f'{content}')
-
-        write_file(target_md, overview)
-
-
-if __name__ == '__main__':
-    update_model_zoo()
diff --git a/docs/en/.gitignore b/docs/en/.gitignore
deleted file mode 100644
index db69732497..0000000000
--- a/docs/en/.gitignore
+++ /dev/null
@@ -1,3 +0,0 @@
-model_zoo
-dataset_zoo
-autoapi
diff --git a/docs/en/Makefile b/docs/en/Makefile
deleted file mode 100644
index 56ae5906ce..0000000000
--- a/docs/en/Makefile
+++ /dev/null
@@ -1,23 +0,0 @@
-# Minimal makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line, and also
-# from the environment for the first two.
-SPHINXOPTS    ?=
-SPHINXBUILD   ?= sphinx-build
-SOURCEDIR     = .
-BUILDDIR      = _build
-
-# Put it first so that "make" without argument is like "make help".
-help:
-	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: help Makefile
-
-# Catch-all target: route all unknown targets to Sphinx using the new
-# "make mode" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).
-%: Makefile
-	rm -rf _build
-	rm -rf model_zoo
-	rm -rf dataset_zoo
-	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/docs/en/_static/css/readthedocs.css b/docs/en/_static/css/readthedocs.css
deleted file mode 100644
index 1f23cd07c1..0000000000
--- a/docs/en/_static/css/readthedocs.css
+++ /dev/null
@@ -1,10 +0,0 @@
-.header-logo {
-    background-image: url("../image/mmagic-logo.png");
-    background-size: 142px 46px;
-    height: 46px;
-    width: 142px;
-}
-
-table.colwidths-auto td {
-  width: 50%
-}
diff --git a/docs/en/_static/image/mmagic-logo.png b/docs/en/_static/image/mmagic-logo.png
deleted file mode 100644
index aefeff7ceb..0000000000
Binary files a/docs/en/_static/image/mmagic-logo.png and /dev/null differ
diff --git a/docs/en/_templates/404.html b/docs/en/_templates/404.html
deleted file mode 100644
index f7cb45c64c..0000000000
--- a/docs/en/_templates/404.html
+++ /dev/null
@@ -1,16 +0,0 @@
-{% extends "layout.html" %}
-
-{% block body %}
-
-<h1>Page Not Found</h1>
-<p>
-  Oops! The page you are looking for cannot be found.
-</p>
-<p>
-    This is likely to happen when you are switching document versions and the page you are reading has moved to another location in the new version. You can look for it in the table of contents on the left, or go to <a href="{{ pathto(root_doc) }}">the homepage</a>.
-</p>
-<p>
-  If you cannot find documentation you want, please <a href="https://github.com/open-mmlab/mmagic/issues/new/choose">open an issue</a> to tell us!
-</p>
-
-{% endblock %}
diff --git a/docs/en/_templates/python/attribute.rst b/docs/en/_templates/python/attribute.rst
deleted file mode 100644
index ebaba555ad..0000000000
--- a/docs/en/_templates/python/attribute.rst
+++ /dev/null
@@ -1 +0,0 @@
-{% extends "python/data.rst" %}
diff --git a/docs/en/_templates/python/class.rst b/docs/en/_templates/python/class.rst
deleted file mode 100644
index df5edffb62..0000000000
--- a/docs/en/_templates/python/class.rst
+++ /dev/null
@@ -1,58 +0,0 @@
-{% if obj.display %}
-.. py:{{ obj.type }}:: {{ obj.short_name }}{% if obj.args %}({{ obj.args }}){% endif %}
-{% for (args, return_annotation) in obj.overloads %}
-   {{ " " * (obj.type | length) }}   {{ obj.short_name }}{% if args %}({{ args }}){% endif %}
-{% endfor %}
-
-
-   {% if obj.bases %}
-   {% if "show-inheritance" in autoapi_options %}
-   Bases: {% for base in obj.bases %}{{ base|link_objs }}{% if not loop.last %}, {% endif %}{% endfor %}
-   {% endif %}
-
-
-   {% if "show-inheritance-diagram" in autoapi_options and obj.bases != ["object"] %}
-   .. autoapi-inheritance-diagram:: {{ obj.obj["full_name"] }}
-      :parts: 1
-      {% if "private-members" in autoapi_options %}
-      :private-bases:
-      {% endif %}
-
-   {% endif %}
-   {% endif %}
-   {% if obj.docstring %}
-   {{ obj.docstring|indent(3) }}
-   {% endif %}
-   {% if "inherited-members" in autoapi_options %}
-   {% set visible_classes = obj.classes|selectattr("display")|list %}
-   {% else %}
-   {% set visible_classes = obj.classes|rejectattr("inherited")|selectattr("display")|list %}
-   {% endif %}
-   {% for klass in visible_classes %}
-   {{ klass.render()|indent(3) }}
-   {% endfor %}
-   {% if "inherited-members" in autoapi_options %}
-   {% set visible_properties = obj.properties|selectattr("display")|list %}
-   {% else %}
-   {% set visible_properties = obj.properties|rejectattr("inherited")|selectattr("display")|list %}
-   {% endif %}
-   {% for property in visible_properties %}
-   {{ property.render()|indent(3) }}
-   {% endfor %}
-   {% if "inherited-members" in autoapi_options %}
-   {% set visible_attributes = obj.attributes|selectattr("display")|list %}
-   {% else %}
-   {% set visible_attributes = obj.attributes|rejectattr("inherited")|selectattr("display")|list %}
-   {% endif %}
-   {% for attribute in visible_attributes %}
-   {{ attribute.render()|indent(3) }}
-   {% endfor %}
-   {% if "inherited-members" in autoapi_options %}
-   {% set visible_methods = obj.methods|selectattr("display")|list %}
-   {% else %}
-   {% set visible_methods = obj.methods|rejectattr("inherited")|selectattr("display")|list %}
-   {% endif %}
-   {% for method in visible_methods %}
-   {{ method.render()|indent(3) }}
-   {% endfor %}
-{% endif %}
diff --git a/docs/en/_templates/python/data.rst b/docs/en/_templates/python/data.rst
deleted file mode 100644
index 89417f1e15..0000000000
--- a/docs/en/_templates/python/data.rst
+++ /dev/null
@@ -1,32 +0,0 @@
-{% if obj.display %}
-.. py:{{ obj.type }}:: {{ obj.name }}
-   {%+ if obj.value is not none or obj.annotation is not none -%}
-   :annotation:
-        {%- if obj.annotation %} :{{ obj.annotation }}
-        {%- endif %}
-        {%- if obj.value is not none %} = {%
-            if obj.value is string and obj.value.splitlines()|count > 1 -%}
-                Multiline-String
-
-    .. raw:: html
-
-        <details><summary>Show Value</summary>
-
-    .. code-block:: text
-        :linenos:
-
-        {{ obj.value|indent(width=8) }}
-
-    .. raw:: html
-
-        </details>
-
-            {%- else -%}
-                {{ obj.value|string|truncate(100) }}
-            {%- endif %}
-        {%- endif %}
-    {% endif %}
-
-
-   {{ obj.docstring|indent(3) }}
-{% endif %}
diff --git a/docs/en/_templates/python/exception.rst b/docs/en/_templates/python/exception.rst
deleted file mode 100644
index 92f3d38fd5..0000000000
--- a/docs/en/_templates/python/exception.rst
+++ /dev/null
@@ -1 +0,0 @@
-{% extends "python/class.rst" %}
diff --git a/docs/en/_templates/python/function.rst b/docs/en/_templates/python/function.rst
deleted file mode 100644
index b00d5c2445..0000000000
--- a/docs/en/_templates/python/function.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-{% if obj.display %}
-.. py:function:: {{ obj.short_name }}({{ obj.args }}){% if obj.return_annotation is not none %} -> {{ obj.return_annotation }}{% endif %}
-
-{% for (args, return_annotation) in obj.overloads %}
-              {{ obj.short_name }}({{ args }}){% if return_annotation is not none %} -> {{ return_annotation }}{% endif %}
-
-{% endfor %}
-   {% for property in obj.properties %}
-   :{{ property }}:
-   {% endfor %}
-
-   {% if obj.docstring %}
-   {{ obj.docstring|indent(3) }}
-   {% endif %}
-{% endif %}
diff --git a/docs/en/_templates/python/method.rst b/docs/en/_templates/python/method.rst
deleted file mode 100644
index 723cb7bbe5..0000000000
--- a/docs/en/_templates/python/method.rst
+++ /dev/null
@@ -1,19 +0,0 @@
-{%- if obj.display %}
-.. py:method:: {{ obj.short_name }}({{ obj.args }}){% if obj.return_annotation is not none %} -> {{ obj.return_annotation }}{% endif %}
-
-{% for (args, return_annotation) in obj.overloads %}
-            {{ obj.short_name }}({{ args }}){% if return_annotation is not none %} -> {{ return_annotation }}{% endif %}
-
-{% endfor %}
-   {% if obj.properties %}
-   {% for property in obj.properties %}
-   :{{ property }}:
-   {% endfor %}
-
-   {% else %}
-
-   {% endif %}
-   {% if obj.docstring %}
-   {{ obj.docstring|indent(3) }}
-   {% endif %}
-{% endif %}
diff --git a/docs/en/_templates/python/module.rst b/docs/en/_templates/python/module.rst
deleted file mode 100644
index d2714f6c9d..0000000000
--- a/docs/en/_templates/python/module.rst
+++ /dev/null
@@ -1,114 +0,0 @@
-{% if not obj.display %}
-:orphan:
-
-{% endif %}
-:py:mod:`{{ obj.name }}`
-=========={{ "=" * obj.name|length }}
-
-.. py:module:: {{ obj.name }}
-
-{% if obj.docstring %}
-.. autoapi-nested-parse::
-
-   {{ obj.docstring|indent(3) }}
-
-{% endif %}
-
-{% block subpackages %}
-{% set visible_subpackages = obj.subpackages|selectattr("display")|list %}
-{% if visible_subpackages %}
-Subpackages
------------
-.. toctree::
-   :titlesonly:
-   :maxdepth: 3
-
-{% for subpackage in visible_subpackages %}
-   {{ subpackage.short_name }}/index.rst
-{% endfor %}
-
-
-{% endif %}
-{% endblock %}
-{% block submodules %}
-{% set visible_submodules = obj.submodules|selectattr("display")|list %}
-{% if visible_submodules %}
-Submodules
-----------
-.. toctree::
-   :titlesonly:
-   :maxdepth: 1
-
-{% for submodule in visible_submodules %}
-   {{ submodule.short_name }}/index.rst
-{% endfor %}
-
-
-{% endif %}
-{% endblock %}
-{% block content %}
-{% if obj.all is not none %}
-{% set visible_children = obj.children|selectattr("short_name", "in", obj.all)|list %}
-{% elif obj.type is equalto("package") %}
-{% set visible_children = obj.children|selectattr("display")|list %}
-{% else %}
-{% set visible_children = obj.children|selectattr("display")|rejectattr("imported")|list %}
-{% endif %}
-{% if visible_children %}
-{{ obj.type|title }} Contents
-{{ "-" * obj.type|length }}---------
-
-{% set visible_classes = visible_children|selectattr("type", "equalto", "class")|list %}
-{% set visible_functions = visible_children|selectattr("type", "equalto", "function")|list %}
-{% set visible_attributes = visible_children|selectattr("type", "equalto", "data")|list %}
-{% if "show-module-summary" in autoapi_options and (visible_classes or visible_functions) %}
-{% block classes scoped %}
-{% if visible_classes %}
-Classes
-~~~~~~~
-
-.. autoapisummary::
-
-{% for klass in visible_classes %}
-   {{ klass.id }}
-{% endfor %}
-
-
-{% endif %}
-{% endblock %}
-
-{% block functions scoped %}
-{% if visible_functions %}
-Functions
-~~~~~~~~~
-
-.. autoapisummary::
-
-{% for function in visible_functions %}
-   {{ function.id }}
-{% endfor %}
-
-
-{% endif %}
-{% endblock %}
-
-{% block attributes scoped %}
-{% if visible_attributes %}
-Attributes
-~~~~~~~~~~
-
-.. autoapisummary::
-
-{% for attribute in visible_attributes %}
-   {{ attribute.id }}
-{% endfor %}
-
-
-{% endif %}
-{% endblock %}
-{% endif %}
-{% for obj_item in visible_children %}
-{{ obj_item.render()|indent(0) }}
-{% endfor %}
-{% endif %}
-{% endblock %}
diff --git a/docs/en/_templates/python/package.rst b/docs/en/_templates/python/package.rst
deleted file mode 100644
index fb9a64965e..0000000000
--- a/docs/en/_templates/python/package.rst
+++ /dev/null
@@ -1 +0,0 @@
-{% extends "python/module.rst" %}
diff --git a/docs/en/_templates/python/property.rst b/docs/en/_templates/python/property.rst
deleted file mode 100644
index 70af24236f..0000000000
--- a/docs/en/_templates/python/property.rst
+++ /dev/null
@@ -1,15 +0,0 @@
-{%- if obj.display %}
-.. py:property:: {{ obj.short_name }}
-   {% if obj.annotation %}
-   :type: {{ obj.annotation }}
-   {% endif %}
-   {% if obj.properties %}
-   {% for property in obj.properties %}
-   :{{ property }}:
-   {% endfor %}
-   {% endif %}
-
-   {% if obj.docstring %}
-   {{ obj.docstring|indent(3) }}
-   {% endif %}
-{% endif %}
diff --git a/docs/en/advanced_guides/data_flow.md b/docs/en/advanced_guides/data_flow.md
deleted file mode 100644
index eb6932653b..0000000000
--- a/docs/en/advanced_guides/data_flow.md
+++ /dev/null
@@ -1,129 +0,0 @@
-# Data flow
-
-- [Data Flow](#data-flow)
-  - [Overview of dataflow](#overview-of-dataflow)
-  - [Data flow between dataset and model](#data-flow-between-dataset-and-model)
-    - [Data from dataloader](#data-from-dataloader)
-    - [Data from data preprocessor](#data-from-data-preprocessor)
-  - [Data flow between model output and visualizer](#data-flow-between-model-output-and-visualizer)
-
-## Overview of dataflow
-
-The [Runner](https://github.com/open-mmlab/mmengine/blob/main/docs/en/design/runner.md) is an "integrator" in MMEngine. It covers all aspects of the framework and shoulders the responsibility of organizing and scheduling nearly all modules, which means that the dataflow between all modules is also controlled by the `Runner`. As illustrated in the [Runner document of MMEngine](https://mmengine.readthedocs.io/en/latest/tutorials/runner.html), the following diagram shows the basic dataflow. In this chapter, we will introduce the dataflow and the data format conventions between the internal modules managed by the [Runner](https://mmengine.readthedocs.io/en/latest/tutorials/runner.html).
-
-<div align="center">
-<img src="https://github.com/open-mmlab/mmagic/assets/36404164/fc6ab53c-8804-416d-94cd-332c533a07ad" height="150" />
-</div>
-
-In the above diagram, at each training iteration, the dataloader loads images from storage and transfers them to the data preprocessor; the data preprocessor moves the images to the specific device and stacks the data into a batch; the model then accepts the batched data as inputs, and finally the outputs of the model are used to compute the loss. Since model parameters are frozen during evaluation, the model output is instead transferred to the [Evaluator](./evaluation.md#ioumetric) to compute metrics, or the data is sent to the [Visualizer](../user_guides/visualization.md) for visualization.
-
-## Data flow between dataset and model
-
-In this section, we will introduce the data flow through the dataset in MMagic. Related tutorials on the [dataset](https://mmagic.readthedocs.io/en/latest/howto/dataset.html) and the [transforms pipeline](https://mmagic.readthedocs.io/en/latest/howto/transforms.html) can be found in the development guides. The data flow between the dataloader and the model can be generally split into four parts (a minimal sketch follows the list):
-
-1. read the original information collected by `XXDataset` and apply data transformations through the transform pipeline;
-
-2. use `PackInputs` to pack the data from the previous transformations into a dictionary;
-
-3. use `collate_fn` to stack a list of tensors into a batched tensor;
-
-4. use the `data preprocessor` to move all the data to the target device, e.g. GPUs, and unpack the dictionary from the dataloader
-   into a tuple containing the input images and meta info (`DataSample`).
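-
-The following minimal, self-contained sketch walks through these four steps with plain dictionaries standing in for the real `XXDataset` items and `DataSample` objects (the names, shapes, and values are illustrative assumptions, not the actual MMagic classes):
-
-```python
-import torch
-
-# steps 1 & 2: each dataset item is transformed and packed into a dict
-items = [
-    dict(inputs=torch.rand(3, 8, 8), data_samples=dict(gt_path=f'{i}.png'))
-    for i in range(4)
-]
-
-# step 3: collate_fn stacks the per-item `inputs` into one batched tensor
-batch = dict(
-    inputs=torch.stack([item['inputs'] for item in items]),
-    data_samples=[item['data_samples'] for item in items],
-)
-
-# step 4: the data preprocessor moves the batch to the target device and
-# unpacks the dict into (inputs, data_samples)
-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-inputs = batch['inputs'].to(device)    # shape: (4, 3, 8, 8)
-data_samples = batch['data_samples']   # per-sample meta info
-```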
-
-### Data from transform pipeline
-
-In MMagic, the different types of `XXDataset` load the data (LQ) and the labels (GT), and perform data transformations in their respective data preprocessing pipelines; the processed data is finally packed into a dictionary through `PackInputs`, which contains all the data required for training and testing iterations.
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> base_edit_model.py </th>
-    <th> base_conditional_gan.py </th>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-@MODELS.register_module()
-class BaseEditModel(BaseModel):
-    """Base model for image and video editing.
-    """
-    def forward(self,
-                inputs: torch.Tensor,
-                data_samples: Optional[List[DataSample]] = None,
-                mode: str = 'tensor',
-                **kwargs) -> Union[torch.Tensor, List[DataSample], dict]:
-        if isinstance(inputs, dict):
-            inputs = inputs['img']
-        if mode == 'tensor':
-            return self.forward_tensor(inputs, data_samples, **kwargs)
-
-        elif mode == 'predict':
-            predictions = self.forward_inference(inputs, data_samples,
-                                                 **kwargs)
-            predictions = self.convert_to_datasample(predictions, data_samples,
-                                                     inputs)
-            return predictions
-
-        elif mode == 'loss':
-            return self.forward_train(inputs, data_samples, **kwargs)
-```
-
-</td>
-
-<td valign="top">
-
-```python
-@MODELS.register_module()
-class BaseConditionalGAN(BaseGAN):
-    """Base class for Conditional GAM models.
-    """
-    def forward(self,
-                inputs: ForwardInputs,
-                data_samples: Optional[list] = None,
-                mode: Optional[str] = None) -> List[DataSample]:
-        if isinstance(inputs, Tensor):
-            noise = inputs
-            sample_kwargs = {}
-        else:
-            noise = inputs.get('noise', None)
-            num_batches = get_valid_num_batches(inputs, data_samples)
-            noise = self.noise_fn(noise, num_batches=num_batches)
-            sample_kwargs = inputs.get('sample_kwargs', dict())
-        num_batches = noise.shape[0]
-
-        pass
-        ...
-```
-
-</td>
-
-</tr>
-</thead>
-</table>
-
-For example, in the `BaseEditModel` and `BaseConditionalGAN` models, the key inputs `img` and `noise` are required. The corresponding fields should also be exposed in the configuration file, with [cyclegan_lsgan-id0-resnet-in_1xb1-80kiters_facades.py](../../../configs/cyclegan/cyclegan_lsgan-id0-resnet-in_1xb1-80kiters_facades.py) as an example.
-
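-A hypothetical config fragment for exposing such fields might look as follows (the transform names follow common MMagic conventions, but the exact fields here are illustrative assumptions rather than the contents of the cited file):
-
-```python
-# illustrative pipeline that loads and packs the `img` field the model consumes
-train_pipeline = [
-    dict(type='LoadImageFromFile', key='img'),
-    dict(type='PackInputs', keys=['img']),
-]
-```
-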
-### Data from dataloader
-
-After receiving a list of dictionaries from the dataset, `collate_fn` in the dataloader will gather the `inputs` in each dict
-and stack them into a batched tensor. In addition, the `data_samples` in each dict will also be collected in a list.
-Then, it will output a dict containing the same keys as the dicts in the received list. Finally, the dataloader
-will output the dict from `collate_fn`. Detailed documentation can be found in [DATASET AND DATALOADER](https://mmengine.readthedocs.io/en/latest/tutorials/dataset.html).
-
-### Data from data preprocessor
-
-The data preprocessor is the last step of processing the data before it is fed into the model. It applies image normalization, converts BGR to RGB, and moves all data to the target device, e.g. GPUs. After the above steps, it outputs a tuple containing a list of batched images and a list of data samples. Detailed documentation can be found in [data_preprocessor](./data_preprocessor.md).
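-
-As a rough illustration, the tensor-level work of a data preprocessor can be sketched as follows (the mean/std values and shapes are illustrative assumptions, not MMagic defaults):
-
-```python
-import torch
-
-# a batch of BGR images in [0, 255], as stacked by collate_fn
-batch = torch.randint(0, 256, (2, 3, 8, 8)).float()
-
-mean = torch.tensor([127.5] * 3).view(1, 3, 1, 1)
-std = torch.tensor([127.5] * 3).view(1, 3, 1, 1)
-
-batch = batch[:, [2, 1, 0], ...]  # convert BGR -> RGB by flipping channels
-batch = (batch - mean) / std      # normalize to roughly [-1, 1]
-batch = batch.to('cuda' if torch.cuda.is_available() else 'cpu')
-```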
-
-## Data flow between model output and visualizer
-
-MMEngine defines the [Abstract Data Element](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/data_element.md) for data transfer, where the [data sample](./structures.md), as a more advanced encapsulation, can hold more categories of label data. In MMagic, the `ConcatImageVisualizer` used for visual comparison also controls the visual content through the `add_datasample` function. The specific configuration is as follows.
-
-```python
-visualizer = dict(
-    type='ConcatImageVisualizer',
-    vis_backends=[dict(type='LocalVisBackend')],
-    fn_key='gt_path',
-    img_keys=['gt_img', 'input', 'pred_img'],
-    bgr2rgb=True)
-```
diff --git a/docs/en/advanced_guides/data_preprocessor.md b/docs/en/advanced_guides/data_preprocessor.md
deleted file mode 100644
index 9a24d865f1..0000000000
--- a/docs/en/advanced_guides/data_preprocessor.md
+++ /dev/null
@@ -1,5 +0,0 @@
-# Data pre-processor \[Coming Soon!\]
-
-We're improving this documentation. Don't hesitate to join us!
-
-[Make a pull request](https://github.com/open-mmlab/mmagic/compare) or [discuss with us](https://github.com/open-mmlab/mmagic/discussions/1429)!
diff --git a/docs/en/advanced_guides/evaluator.md b/docs/en/advanced_guides/evaluator.md
deleted file mode 100644
index 2e7f365206..0000000000
--- a/docs/en/advanced_guides/evaluator.md
+++ /dev/null
@@ -1,222 +0,0 @@
-# Evaluator
-
-## Evaluation Metrics and Evaluators
-
-In model validation and testing, it is usually necessary to quantitatively evaluate the accuracy of the model. In MMagic, evaluation metrics and evaluators are implemented to accomplish this functionality.
-
-- Evaluation metrics are used to calculate specific model accuracy indicators based on test data and model prediction results. MMagic provides a variety of built-in metrics, which can be found in the metrics documentation. Additionally, metrics are decoupled from datasets and can be used for multiple datasets.
-- The evaluator is the top-level module for evaluation metrics and usually contains one or more metrics. The purpose of the evaluator is to perform the necessary data format conversions and call the evaluation metrics to calculate the model accuracy during model evaluation. The evaluator is typically built by a `Runner` or a testing script, which are used for online evaluation and offline evaluation, respectively.
-
-The evaluator in MMagic inherits from that in MMEngine and has a similar basic usage. For specific information, you can refer to [Model Accuracy Evaluation](https://mmengine.readthedocs.io/en/latest/tutorials/evaluation.html). However, different from other high-level vision tasks, the evaluation metrics for generative models often have multiple inputs. For example, the Inception Score (IS) metric needs only fake images, regardless of how many real images are available, while the Perceptual Path Length (PPL) requires sampling from the latent space. To accommodate different evaluation metrics, MMagic introduces two important methods, `prepare_metrics` and `prepare_samplers`, to meet these requirements.
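-
-As a concrete example, a configuration for online evaluation might look like the following (a hypothetical snippet; the metric choices and fields are illustrative assumptions, not a prescribed setup):
-
-```python
-# build an evaluator that computes two image-quality metrics
-val_evaluator = dict(
-    type='Evaluator',
-    metrics=[
-        dict(type='PSNR'),
-        dict(type='SSIM'),
-    ])
-```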
-
-## prepare_metrics
-
-```python
-class Evaluator(Evaluator):
-	...
-    def prepare_metrics(self, module: BaseModel, dataloader: DataLoader):
-        """Prepare for metrics before evaluation starts. Some metrics use
-        pretrained model to extract feature. Some metrics use pretrained model
-        to extract feature and input channel order may vary among those models.
-        Therefore, we first parse the output color order from data
-        preprocessor and set the color order for each metric. Then we pass the
-        dataloader to each metrics to prepare pre-calculated items. (e.g.
-        inception feature of the real images). If metric has no pre-calculated
-        items, :meth:`metric.prepare` will be ignored. Once the function has
-        been called, :attr:`self.is_ready` will be set as `True`. If
-        :attr:`self.is_ready` is `True`, this function will directly return to
-        avoid duplicate computation.
-
-        Args:
-            module (BaseModel): Model to evaluate.
-            dataloader (DataLoader): The dataloader for real images.
-        """
-        if self.metrics is None:
-            self.is_ready = True
-            return
-
-        if self.is_ready:
-            return
-
-        # prepare metrics
-        for metric in self.metrics:
-            metric.prepare(module, dataloader)
-        self.is_ready = True
-```
-
-The `prepare_metrics` method needs to be called before the evaluation starts. It performs the preprocessing for each metric by sequentially calling the `prepare` method of each metric in the evaluator, preparing any pre-calculated elements needed by that metric (such as features from hidden layers). Additionally, to avoid repeated calls, the `evaluator.is_ready` flag is set to `True` once the preprocessing for all metrics is completed.
-
-```python
-class GenMetric(BaseMetric):
-	...
-    def prepare(self, module: nn.Module, dataloader: DataLoader) -> None:
-        """Prepare for the pre-calculating items of the metric. Defaults to do
-        nothing.
-
-        Args:
-            module (nn.Module): Model to evaluate.
-            dataloader (DataLoader): Dataloader for the real images.
-        """
-        if is_model_wrapper(module):
-            module = module.module
-        self.data_preprocessor = module.data_preprocessor
-```
-
-## prepare_samplers
-
-Different metrics require different inputs from generative models. For example, FID, KID, and IS only need the generated fake images, while PPL requires vectors sampled from the latent space. Therefore, MMagic groups the evaluation metrics based on the type of input they need. One or more evaluation metrics in the same group share a data sampler. The sampler mode of each evaluation metric is determined by the `SAMPLER_MODE` attribute of that metric.
-
-```python
-class GenMetric(BaseMetric):
-	...
-    SAMPLER_MODE = 'normal'
-
-class GenerativeMetric(GenMetric):
-	...
-    SAMPLER_MODE = 'Generative'
-```
-
-The `prepare_samplers` method of the evaluator is responsible for preparing the data samplers based on the sampler mode of all evaluation metrics.
-
-```python
-class Evaluator(Evaluator):
-	...
-    def prepare_samplers(self, module: BaseModel, dataloader: DataLoader
-                         ) -> List[Tuple[List[BaseMetric], Iterator]]:
-        """Prepare for the sampler for metrics whose sampling mode are
-        different. For generative models, different metric need image
-        generated with different inputs. For example, FID, KID and IS need
-        images generated with random noise, and PPL need paired images on the
-        specific noise interpolation path. Therefore, we first group metrics
-        with respect to their sampler's mode (refers to
-        :attr:~`GenMetrics.SAMPLER_MODE`), and build a shared sampler for each
-        metric group. To be noted that, the length of the shared sampler
-        depends on the metric of the most images required in each group.
-
-        Args:
-            module (BaseModel): Model to evaluate. Some metrics (e.g. PPL)
-                require `module` in their sampler.
-            dataloader (DataLoader): The dataloader for real image.
-
-        Returns:
-            List[Tuple[List[BaseMetric], Iterator]]: A list of "metrics-shared
-                sampler" pair.
-        """
-        if self.metrics is None:
-            return [[[None], []]]
-
-        # grouping metrics based on `SAMPLER_MODE` and `sample_model`
-        metric_mode_dict = defaultdict(list)
-        for metric in self.metrics:  # Specify a sampler group for each metric.
-            metric_md5 = self._cal_metric_hash(metric)
-            metric_mode_dict[metric_md5].append(metric)
-
-        metrics_sampler_list = []
-        for metrics in metric_mode_dict.values(): # Generate a sampler for each group.
-            first_metric = metrics[0]
-            metrics_sampler_list.append([
-                metrics,
-                first_metric.get_metric_sampler(module, dataloader, metrics)
-            ])
-
-        return metrics_sampler_list
-```
-
-The method will first check whether there are any evaluation metrics to calculate: if not, it will return directly. If there are metrics to calculate, it will iterate through all the evaluation metrics and group them based on their `SAMPLER_MODE` and `sample_model`. The specific implementation is as follows: it calculates a hash code from `SAMPLER_MODE` and `sample_model`, and puts the evaluation metrics with the same hash code into the same list.
-
-```python
-class Evaluator(Evaluator):
-	...
-    @staticmethod
-    def _cal_metric_hash(metric: GenMetric):
-        """Calculate a unique hash value based on the `SAMPLER_MODE` and
-        `sample_model`."""
-        sampler_mode = metric.SAMPLER_MODE
-        sample_model = metric.sample_model
-        metric_dict = {
-            'SAMPLER_MODE': sampler_mode,
-            'sample_model': sample_model
-        }
-        if hasattr(metric, 'need_cond_input'):
-            metric_dict['need_cond_input'] = metric.need_cond_input
-        md5 = hashlib.md5(repr(metric_dict).encode('utf-8')).hexdigest()
-        return md5
-```
-
-Finally, this method will generate a sampler for each evaluation metric group and add it to a list to return.
-
-## Evaluation process of an evaluator
-
-The implementation of the evaluation process can be found in `mmagic.engine.runner.MultiValLoop.run` and `mmagic.engine.runner.MultiTestLoop.run`. Here we take `mmagic.engine.runner.MultiValLoop.run` as an example.
-
-```python
-class MultiValLoop(BaseLoop):
-	...
-    def run(self):
-	...
-        # 1. prepare all metrics and get the total length
-        metrics_sampler_lists = []
-        meta_info_list = []
-        dataset_name_list = []
-        for evaluator, dataloader in zip(self.evaluators, self.dataloaders):
-            # 1.1 prepare for metrics
-            evaluator.prepare_metrics(module, dataloader)
-            # 1.2 prepare for metric-sampler pair
-            metrics_sampler_list = evaluator.prepare_samplers(
-                module, dataloader)
-            metrics_sampler_lists.append(metrics_sampler_list)
-            # 1.3 update total length
-            self._total_length += sum([
-                len(metrics_sampler[1])
-                for metrics_sampler in metrics_sampler_list
-            ])
-            # 1.4 save metainfo and dataset's name
-            meta_info_list.append(
-                getattr(dataloader.dataset, 'metainfo', None))
-            dataset_name_list.append(dataloader.dataset.__class__.__name__)
-```
-
-First, the runner performs preprocessing and obtains the data samplers needed for evaluation through the `evaluator.prepare_metrics` and `evaluator.prepare_samplers` methods, and updates the total number of samples to be drawn from those samplers. Since the evaluation metrics and datasets in MMagic are decoupled, some `meta_info` required for evaluation also needs to be saved and passed to the evaluator.
-
-```python
-class MultiValLoop(BaseLoop):
-	...
-    def run(self):
-	...
-        # 2. run evaluation
-        for idx in range(len(self.evaluators)):
-            # 2.1 set self.evaluator for run_iter
-            self.evaluator = self.evaluators[idx]
-            self.dataloader = self.dataloaders[idx]
-
-            # 2.2 update metainfo for evaluator and visualizer
-            meta_info = meta_info_list[idx]
-            dataset_name = dataset_name_list[idx]
-            if meta_info:
-                self.evaluator.dataset_meta = meta_info
-                self._runner.visualizer.dataset_meta = meta_info
-            else:
-                warnings.warn(
-                    f'Dataset {dataset_name} has no metainfo. `dataset_meta` '
-                    'in evaluator, metric and visualizer will be None.')
-
-            # 2.3 generate images
-            metrics_sampler_list = metrics_sampler_lists[idx]
-            for metrics, sampler in metrics_sampler_list:
-                for data in sampler:
-                    self.run_iter(idx_counter, data, metrics)
-                    idx_counter += 1
-
-            # 2.4 evaluate metrics and update multi_metric
-            metrics = self.evaluator.evaluate()
-            if multi_metric and metrics.keys() & multi_metric.keys():
-                raise ValueError('Please set different prefix for different'
-                                 ' datasets in `val_evaluator`')
-            else:
-                multi_metric.update(metrics)
-        # 3. finish evaluation and call hooks
-        self._runner.call_hook('after_val_epoch', metrics=multi_metric)
-        self._runner.call_hook('after_val')
-```
-
-After the preparation for evaluation is completed, the runner iterates through all the evaluators and performs the evaluation one by one. Each evaluator corresponds to one dataloader, together completing the evaluation of one dataset. Specifically, during the evaluation with each evaluator, the required `meta_info` is passed to the evaluator, then all of its metrics-sampler pairs are iterated to generate the images needed for evaluation, and finally the evaluation is completed.
diff --git a/docs/en/advanced_guides/structures.md b/docs/en/advanced_guides/structures.md
deleted file mode 100644
index a0b935272f..0000000000
--- a/docs/en/advanced_guides/structures.md
+++ /dev/null
@@ -1,106 +0,0 @@
-# Data Structure
-
-`DataSample`, the data structure interface of MMagic, inherits from [`BaseDataElement`](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/data_element.html). The base class implements basic add/delete/update/check functions and supports data migration between different devices, as well as dictionary-like and tensor-like operations, which also allows the interfaces of different algorithms to be unified.
-
-Specifically, an instance of `BaseDataElement` consists of two components:
-
-- `metainfo`, which contains some meta information,
-  e.g., `img_shape`, `img_id`, `color_order`, etc.
-- `data`, which contains the data used in the loop.
-
-Thanks to `DataSample`, the data flow between each module in the algorithm libraries, such as [`visualizer`](https://mmagic.readthedocs.io/en/latest/user_guides/visualization.html), [`evaluator`](https://mmagic.readthedocs.io/en/latest/advanced_guides/evaluator.html), and [`model`](https://mmagic.readthedocs.io/en/latest/howto/models.html), is greatly simplified.
-
-The attributes in `DataSample` are divided into several parts:
-
-```python
-- ``gt_img``: Ground truth image(s).
-- ``pred_img``: Image(s) of model predictions.
-- ``ref_img``: Reference image(s).
-- ``mask``: Mask in Inpainting.
-- ``trimap``: Trimap in Matting.
-- ``gt_alpha``: Ground truth alpha image in Matting.
-- ``pred_alpha``: Predicted alpha image in Matting.
-- ``gt_fg``: Ground truth foreground image in Matting.
-- ``pred_fg``: Predicted foreground image in Matting.
-- ``gt_bg``: Ground truth background image in Matting.
-- ``pred_bg``: Predicted background image in Matting.
-- ``gt_merged``: Ground truth merged image in Matting.
-```
-
-The following sample code demonstrates the components of `DataSample`:
-
-```python
->>> import torch
->>> import numpy as np
->>> from mmagic.structures import DataSample
->>> img_meta = dict(img_shape=(800, 1196, 3))
->>> img = torch.rand((3, 800, 1196))
->>> data_sample = DataSample(gt_img=img, metainfo=img_meta)
->>> assert 'img_shape' in data_sample.metainfo_keys()
->>> data_sample  # metainfo and data of DataSample
-<DataSample(
-
-    META INFORMATION
-    img_shape: (800, 1196, 3)
-
-    DATA FIELDS
-    gt_img: tensor(3, 800, 1196)
-) at 0x1f6a5a99a00>
-```
-
-We also support `stack` and `split` operations to handle a batch of data samples.
-
-1. Stack
-
-Stack a list of data samples into one. All tensor fields will be stacked along the first dimension; non-tensor values will be collected in a list.
-
-```
-    Args:
-        data_samples (Sequence['DataSample']): A sequence of `DataSample` to stack.
-
-    Returns:
-        DataSample: The stacked data sample.
-```
-
-2. Split
-
-Split a sequence of data samples along the first dimension.
-
-```
-    Args:
-        allow_nonseq_value (bool): Whether to allow non-sequential data in
-            the split operation. If True, non-sequential data will be copied
-            for all split data samples. Otherwise, an error will be raised.
-            Defaults to False.
-
-    Returns:
-        Sequence[DataSample]: The list of data samples after splitting.
-
-The following sample code demonstrates the use of `stack` and `split`:
-
-```py
-import torch
-import numpy as np
-from mmagic.structures import DataSample
-img_meta1 = img_meta2 = dict(img_shape=(800, 1196, 3))
-img1 = torch.rand((3, 800, 1196))
-img2 = torch.rand((3, 800, 1196))
-data_sample1 = DataSample(gt_img=img1, metainfo=img_meta1)
-data_sample2 = DataSample(gt_img=img2, metainfo=img_meta2)
-```
-
-```py
-# stack them and then use as batched-tensor!
-data_sample = DataSample.stack([data_sample1, data_sample2])
-print(data_sample.gt_img.shape)
-    torch.Size([2, 3, 800, 1196])
-print(data_sample.metainfo)
-    {'img_shape': [(800, 1196, 3), (800, 1196, 3)]}
-
-# split them if you want
-data_sample1_, data_sample2_ = data_sample.split()
-assert (data_sample1_.gt_img == img1).all()
-assert (data_sample2_.gt_img == img2).all()
-```
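-
-Continuing the snippet above, the dictionary-like and tensor-like operations mentioned at the beginning of this page are inherited from `BaseDataElement` and can be sketched as follows:
-
-```py
-# move every tensor field at once and query keys like a dict
-data_sample = data_sample.to('cpu')
-assert 'gt_img' in data_sample.keys()
-```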
diff --git a/docs/en/changelog.md b/docs/en/changelog.md
deleted file mode 100644
index 2f8101b724..0000000000
--- a/docs/en/changelog.md
+++ /dev/null
@@ -1,562 +0,0 @@
-# Changelog
-
-## v1.1.0 (22/09/2023)
-
-**Highlights**
-
-In this new version of MMagic, we have added support for the following five new algorithms.
-
-- Support ViCo, a new SD personalization method. [Click to View](https://github.com/open-mmlab/mmagic/blob/main/configs/vico/README.md)
-
-<table align="center">
-<thead>
-  <tr>
-    <td>
-<div align="center">
-  <img src="https://github.com/open-mmlab/mmagic/assets/71176040/58a6953c-053a-40ea-8826-eee428c992b5" width="800"/>
-  <br/>
-</thead>
-</table>
-
-- Support AnimateDiff, a popular text2animation method. [Click to View](https://github.com/open-mmlab/mmagic/blob/main/configs/animatediff/README.md)
-
-![512](https://github.com/ElliotQi/mmagic/assets/46469021/54d92aca-dfa9-4eeb-ba38-3f6c981e5399)
-
-- Support SDXL. [Click to View](https://github.com/open-mmlab/mmagic/blob/main/configs/stable_diffusion_xl/README.md)
-
-<div align=center>
-<img src="https://github.com/okotaku/diffengine/assets/24734142/27d4ebad-5705-4500-826f-41f425a08c0d"/>
-</div>
-
-- Support DragGAN implementation with MMagic. [Click to View](https://github.com/open-mmlab/mmagic/blob/main/configs/draggan/README.md)
-
-<div align=center>
-<img src="https://github.com/open-mmlab/mmagic/assets/55343765/7c397bd0-fa07-48fe-8a7c-a4022907404b"/>
-</div>
-
-- Support for FastComposer. [Click to View](https://github.com/open-mmlab/mmagic/blob/main/configs/fastcomposer/README.md)
-
-<div align=center>
-<img src="https://user-images.githubusercontent.com/14927720/265914135-8a25789c-8d30-40cb-8ac5-e3bd3b617aac.png">
-</div>
-
-**New Features & Improvements**
-
-- \[Feature\] Support inference with diffusers pipeline, sd_xl first. by @liuwenran in https://github.com/open-mmlab/mmagic/pull/2023
-- \[Enhance\] add negative prompt for sd inferencer by @liuwenran in https://github.com/open-mmlab/mmagic/pull/2021
-- \[Enhance\] Update flake8 checking config in setup.cfg by @LeoXing1996 in https://github.com/open-mmlab/mmagic/pull/2007
-- \[Enhance\] Add ‘config_name' as a supplement to the 'model_setting' by @liuwenran in https://github.com/open-mmlab/mmagic/pull/2027
-- \[Enhance\] faster test by @okotaku in https://github.com/open-mmlab/mmagic/pull/2034
-- \[Enhance\] Add OpenXLab Badge by @ZhaoQiiii in https://github.com/open-mmlab/mmagic/pull/2037
-
-**CodeCamp Contributions**
-
-- \[CodeCamp2023-643\] Add new configs of BigGAN by @limafang in https://github.com/open-mmlab/mmagic/pull/2003
-- \[CodeCamp2023-648\] MMagic new config GuidedDiffusion by @ooooo-create in https://github.com/open-mmlab/mmagic/pull/2005
-- \[CodeCamp2023-649\] MMagic new config Instance Colorization by @ooooo-create in https://github.com/open-mmlab/mmagic/pull/2010
-- \[CodeCamp2023-652\] MMagic new config StyleGAN3 by @hhy150 in https://github.com/open-mmlab/mmagic/pull/2018
-- \[CodeCamp2023-653\] Add new configs of Real BasicVSR by @RangeKing in https://github.com/open-mmlab/mmagic/pull/2030
-
-**Bug Fixes**
-
-- \[Fix\] Fix best practice and back to contents on mainpage, add new models to model zoo by @liuwenran in https://github.com/open-mmlab/mmagic/pull/2001
-- \[Fix\] Check CI error and remove main stream gpu test by @liuwenran in https://github.com/open-mmlab/mmagic/pull/2013
-- \[Fix\] Check circle ci memory by @liuwenran in https://github.com/open-mmlab/mmagic/pull/2016
-- \[Fix\] remove code and fix clip loss ut test by @liuwenran in https://github.com/open-mmlab/mmagic/pull/2017
-- \[Fix\] mock infer in diffusers pipeline inferencer ut. by @liuwenran in https://github.com/open-mmlab/mmagic/pull/2026
-- \[Fix\] Fix bug caused by merging draggan by @liuwenran in https://github.com/open-mmlab/mmagic/pull/2029
-- \[Fix\] Update QRcode by @crazysteeaam in https://github.com/open-mmlab/mmagic/pull/2009
-- \[Fix\] Replace the download links in README with OpenXLab version by @FerryHuang in https://github.com/open-mmlab/mmagic/pull/2038
-- \[Fix\] Increase docstring coverage by @liuwenran in https://github.com/open-mmlab/mmagic/pull/2039
-
-**New Contributors**
-
-- @limafang made their first contribution in https://github.com/open-mmlab/mmagic/pull/2003
-- @ooooo-create made their first contribution in https://github.com/open-mmlab/mmagic/pull/2005
-- @hhy150 made their first contribution in https://github.com/open-mmlab/mmagic/pull/2018
-- @ZhaoQiiii made their first contribution in https://github.com/open-mmlab/mmagic/pull/2037
-- @ElliotQi made their first contribution in https://github.com/open-mmlab/mmagic/pull/1980
-- @Beaconsyh08 made their first contribution in https://github.com/open-mmlab/mmagic/pull/2012
-
-**Full Changelog**: https://github.com/open-mmlab/mmagic/compare/v1.0.2...v1.0.3
-
-## v1.0.2 (24/08/2023)
-
-**Highlights**
-
-**1. More detailed documentation**
-
-Thank you to the community contributors for helping us improve the documentation. We have improved many documents, including both Chinese and English versions. Please refer to the [documentation](https://mmagic.readthedocs.io/en/latest/) for more details.
-
-**2. New algorithms**
-
-- Support Prompt-to-prompt, DDIM Inversion and Null-text Inversion. [Click to View.](https://github.com/open-mmlab/mmagic/blob/main/projects/prompt_to_prompt/README.md)
-
-From right to left: original image, DDIM inversion, Null-text inversion
-
-<center class="half">
-    <img src="https://github.com/FerryHuang/mmagic/assets/71176040/34d8a467-5378-41fb-83c6-b23c9dee8f0a" width="200"/><img src="https://github.com/FerryHuang/mmagic/assets/71176040/3d3814b4-7fb5-4232-a56f-fd7fef0ba28e" width="200"/><img src="https://github.com/FerryHuang/mmagic/assets/71176040/43008ed4-a5a3-4f81-ba9f-95d9e79e6a08" width="200"/>
-</center>
-
-Prompt-to-prompt Editing
-
-<div align="center">
-  <b>cat -> dog</b>
-  <br/>
-  <img src="https://github.com/FerryHuang/mmagic/assets/71176040/f5d3fc0c-aa7b-4525-9364-365b254d51ca" width="500"/>
-</div>
-
-<div align="center">
-  <b>spider man -> iron man(attention replace)</b>
-  <br/>
-  <img src="https://github.com/FerryHuang/mmagic/assets/71176040/074adbc6-bd48-4c82-99aa-f322cf937f5a" width="500"/>
-</div>
-
-<div align="center">
-  <b>Eiffel tower -> Eiffel tower at night (attention refine)</b>
-  <br/>
-  <img src="https://github.com/FerryHuang/mmagic/assets/71176040/f815dab3-b20c-4936-90e3-a060d3717e22" width="500"/>
-</div>
-
-<div align="center">
-  <b>blossom sakura tree -> blossom(-3) sakura tree (attention reweight)</b>
-  <br/>
-  <img src="https://github.com/FerryHuang/mmagic/assets/71176040/5ef770b9-4f28-4ae7-84b0-6c15ea7450e9" width="500"/>
-</div>
-
-- Support Textual Inversion. [Click to view.](https://github.com/open-mmlab/mmagic/blob/main/configs/textual_inversion/README.md)
-
-<div align=center>
-<img src="https://github.com/open-mmlab/mmagic/assets/28132635/b2dac6f1-5151-4199-bcc2-71b5b1523a16">
-</div>
-
-- Support Attention Injection for more stable video generation with controlnet. [Click to view.](https://github.com/open-mmlab/mmagic/blob/main/configs/controlnet_animation/README.md)
-- Support Stable Diffusion Inpainting. [Click to view.](https://github.com/open-mmlab/mmagic/blob/main/configs/stable_diffusion/README.md)
-
-**New Features & Improvements**
-
-- \[Enhancement\] Support noise offset in stable diffusion training by @LeoXing1996 in https://github.com/open-mmlab/mmagic/pull/1880
-- \[Community\] Support Glide Upsampler by @Taited in https://github.com/open-mmlab/mmagic/pull/1663
-- \[Enhance\] support controlnet inferencer by @Z-Fran in https://github.com/open-mmlab/mmagic/pull/1891
-- \[Feature\] support Albumentations augmentation transformations and pipeline by @Z-Fran in https://github.com/open-mmlab/mmagic/pull/1894
-- \[Feature\] Add Attention Injection for unet by @liuwenran in https://github.com/open-mmlab/mmagic/pull/1895
-- \[Enhance\] update benchmark scripts by @Z-Fran in https://github.com/open-mmlab/mmagic/pull/1907
-- \[Enhancement\] update mmagic docs by @crazysteeaam in https://github.com/open-mmlab/mmagic/pull/1920
-- \[Enhancement\] Support Prompt-to-prompt, ddim inversion and null-text inversion by @FerryHuang in https://github.com/open-mmlab/mmagic/pull/1908
-- \[CodeCamp2023-302\] Support MMagic visualization and write a user guide  by @aptsunny in https://github.com/open-mmlab/mmagic/pull/1939
-- \[Feature\] Support Textual Inversion by @LeoXing1996 in https://github.com/open-mmlab/mmagic/pull/1822
-- \[Feature\] Support stable diffusion inpaint by @Taited in https://github.com/open-mmlab/mmagic/pull/1976
-- \[Enhancement\] Adopt `BaseModule` for some models by @LeoXing1996 in https://github.com/open-mmlab/mmagic/pull/1543
-- \[MMSIG\] Support DeblurGANv2 inference by @xiaomile in https://github.com/open-mmlab/mmagic/pull/1955
-- \[CodeCamp2023-647\] Add new configs of EG3D by @RangeKing in https://github.com/open-mmlab/mmagic/pull/1985
-
-**Bug Fixes**
-
-- Fix dtype error in StableDiffusion and DreamBooth training by @LeoXing1996 in https://github.com/open-mmlab/mmagic/pull/1879
-- Fix gui VideoSlider bug by @Z-Fran in https://github.com/open-mmlab/mmagic/pull/1885
-- Fix init_model and glide demo by @Z-Fran in https://github.com/open-mmlab/mmagic/pull/1888
-- Fix InstColorization bug when dim=3 by @Z-Fran in https://github.com/open-mmlab/mmagic/pull/1901
-- Fix sd and controlnet fp16 bugs by @Z-Fran in https://github.com/open-mmlab/mmagic/pull/1914
-- Fix num_images_per_prompt in controlnet by @LeoXing1996 in https://github.com/open-mmlab/mmagic/pull/1936
-- Revise metafile for sd-inpainting to fix inferencer init by @LeoXing1996 in https://github.com/open-mmlab/mmagic/pull/1995
-
-**New Contributors**
-
-- @wyyang23 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1886
-- @yehuixie made their first contribution in https://github.com/open-mmlab/mmagic/pull/1912
-- @crazysteeaam made their first contribution in https://github.com/open-mmlab/mmagic/pull/1920
-- @BUPT-NingXinyu made their first contribution in https://github.com/open-mmlab/mmagic/pull/1921
-- @zhjunqin made their first contribution in https://github.com/open-mmlab/mmagic/pull/1918
-- @xuesheng1031 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1923
-- @wslgqq277g made their first contribution in https://github.com/open-mmlab/mmagic/pull/1934
-- @LYMDLUT made their first contribution in https://github.com/open-mmlab/mmagic/pull/1933
-- @RangeKing made their first contribution in https://github.com/open-mmlab/mmagic/pull/1930
-- @xin-li-67 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1932
-- @chg0901 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1931
-- @aptsunny made their first contribution in https://github.com/open-mmlab/mmagic/pull/1939
-- @YanxingLiu made their first contribution in https://github.com/open-mmlab/mmagic/pull/1943
-- @tackhwa made their first contribution in https://github.com/open-mmlab/mmagic/pull/1937
-- @Geo-Chou made their first contribution in https://github.com/open-mmlab/mmagic/pull/1940
-- @qsun1 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1956
-- @ththth888 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1961
-- @sijiua made their first contribution in https://github.com/open-mmlab/mmagic/pull/1967
-- @MING-ZCH made their first contribution in https://github.com/open-mmlab/mmagic/pull/1982
-- @AllYoung made their first contribution in https://github.com/open-mmlab/mmagic/pull/1996
-
-## v1.0.1 (26/05/2023)
-
-**New Features & Improvements**
-
-- Support tomesd for StableDiffusion speed-up. [#1801](https://github.com/open-mmlab/mmagic/pull/1801)
-- Support all inpainting/matting/image restoration models inferencer. [#1833](https://github.com/open-mmlab/mmagic/pull/1833), [#1873](https://github.com/open-mmlab/mmagic/pull/1873)
-- Support animated drawings at projects. [#1837](https://github.com/open-mmlab/mmagic/pull/1837)
-- Support Style-Based Global Appearance Flow for Virtual Try-On at projects. [#1786](https://github.com/open-mmlab/mmagic/pull/1786)
-- Support tokenizer wrapper and `EmbeddingLayerWithFixes`. [#1846](https://github.com/open-mmlab/mmagic/pull/1846)
-
-**Bug Fixes**
-
-- Fix install requirements. [#1819](https://github.com/open-mmlab/mmagic/pull/1819)
-- Fix inst-colorization PackInputs. [#1828](https://github.com/open-mmlab/mmagic/pull/1828), [#1827](https://github.com/open-mmlab/mmagic/pull/1827)
-- Fix inferencer in pip-install. [#1875](https://github.com/open-mmlab/mmagic/pull/1875)
-
-**New Contributors**
-
-- @XDUWQ made their first contribution in https://github.com/open-mmlab/mmagic/pull/1830
-- @FerryHuang made their first contribution in https://github.com/open-mmlab/mmagic/pull/1786
-- @bobo0810 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1851
-- @jercylew made their first contribution in https://github.com/open-mmlab/mmagic/pull/1874
-
-## v1.0.0 (25/04/2023)
-
-We are excited to announce the release of MMagic v1.0.0 that inherits from [MMEditing](https://github.com/open-mmlab/mmediting) and [MMGeneration](https://github.com/open-mmlab/mmgeneration).
-
-![mmagic-log](https://user-images.githubusercontent.com/49083766/233557648-9034f5a0-c85d-4092-b700-3a28072251b6.png)
-
-Since its inception, MMEditing has been the preferred algorithm library for many super-resolution, editing, and generation tasks, helping research teams win more than 10 top international competitions and supporting over 100 GitHub ecosystem projects. After iterative updates with the OpenMMLab 2.0 framework and merging with MMGeneration, MMEditing has become a powerful tool that supports low-level algorithms based on both GANs and CNNs.
-
-Today, MMEditing embraces Generative AI and transforms into a more advanced and comprehensive AIGC toolkit: **MMagic** (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation).
-
-In MMagic, we have supported 53+ models in multiple tasks such as fine-tuning for stable diffusion, text-to-image, image and video restoration, super-resolution, editing and generation. With excellent training and experiment management support from [MMEngine](https://github.com/open-mmlab/mmengine), MMagic will provide more agile and flexible experimental support for researchers and AIGC enthusiasts, and help you on your AIGC exploration journey. With MMagic, experience more magic in generation! Let's open a new era beyond editing together. More than Editing, Unlock the Magic!
-
-**Highlights**
-
-**1. New Models**
-
-We support 11 new models in 4 new tasks.
-
-- Text2Image / Diffusion
-  - ControlNet
-  - DreamBooth
-  - Stable Diffusion
-  - Disco Diffusion
-  - GLIDE
-  - Guided Diffusion
-- 3D-aware Generation
-  - EG3D
-- Image Restoration
-  - NAFNet
-  - Restormer
-  - SwinIR
-- Image Colorization
-  - InstColorization
-
-https://user-images.githubusercontent.com/49083766/233564593-7d3d48ed-e843-4432-b610-35e3d257765c.mp4
-
-**2. Magic Diffusion Model**
-
-For the Diffusion Model, we provide the following "magic":
-
-- Support image generation based on Stable Diffusion and Disco Diffusion.
-
-- Support fine-tuning methods such as DreamBooth and DreamBooth LoRA.
-
-- Support controllability in text-to-image generation using ControlNet.
-  ![de87f16f-bf6d-4a61-8406-5ecdbb9167b6](https://user-images.githubusercontent.com/49083766/233558077-2005e603-c5a8-49af-930f-e7a465ca818b.png)
-
-- Support acceleration and optimization strategies based on xFormers to improve training and inference efficiency.
-
-- Support video generation based on MultiFrame Render.
-  MMagic supports the generation of long videos in various styles through ControlNet and MultiFrame Render.
-  prompt keywords: a handsome man, silver hair, smiling, play basketball
-
-  https://user-images.githubusercontent.com/12782558/227149757-fd054d32-554f-45d5-9f09-319184866d85.mp4
-
-  prompt keywords: a girl, black hair, white pants, smiling, play basketball
-
-  https://user-images.githubusercontent.com/49083766/233559964-bd5127bd-52f6-44b6-a089-9d7adfbc2430.mp4
-
-  prompt keywords: a handsome man
-
-  https://user-images.githubusercontent.com/12782558/227152129-d70d5f76-a6fc-4d23-97d1-a94abd08f95a.mp4
-
-- Support calling basic models and sampling strategies through DiffuserWrapper.
-
-- SAM + MMagic = Generate Anything!
-  SAM (Segment Anything Model) is a popular model these days and can also provide more support for MMagic! If you want to create your own animation, you can go to [OpenMMLab PlayGround](https://github.com/open-mmlab/playground/blob/main/mmediting_sam/README.md).
-
-  https://user-images.githubusercontent.com/49083766/233562228-f39fc675-326c-4ae8-986a-c942059effd0.mp4
-
-**3. Upgraded Framework**
-
-To improve your "spellcasting" efficiency, we have made the following adjustments to the "magic circuit":
-
-- By using MMEngine and MMCV of the OpenMMLab 2.0 framework, we decompose the editing framework into different modules, and one can easily construct a customized editor framework by combining them. We define the training process just like playing with Legos and provide rich components and strategies. In MMagic, you have complete control over the training process with different levels of APIs.
-- Support for 33+ algorithms accelerated by PyTorch 2.0.
-- Refactor DataSample to support the combination and splitting of batch dimensions.
-- Refactor DataPreprocessor and unify the data format for various tasks during training and inference.
-- Refactor MultiValLoop and MultiTestLoop, supporting the evaluation of both generation-type metrics (e.g. FID) and reconstruction-type metrics (e.g. SSIM), and supporting the evaluation of multiple datasets at once.
-- Support visualization on local files or using tensorboard and wandb.
-
-**New Features & Improvements**
-
-- Support 53+ algorithms, 232+ configs, 213+ checkpoints, 26+ loss functions, and 20+ metrics.
-- Support controlnet animation and Gradio gui. [Click to view.](https://github.com/open-mmlab/mmagic/tree/main/configs/controlnet_animation)
-- Support Inferencer and Demo using High-level Inference APIs. [Click to view.](https://github.com/open-mmlab/mmagic/tree/main/demo)
-- Support Gradio gui of Inpainting inference. [Click to view.](https://github.com/open-mmlab/mmagic/blob/main/demo/gradio-demo.py)
-- Support qualitative comparison tools. [Click to view.](https://github.com/open-mmlab/mmagic/tree/main/tools/gui)
-- Enable projects. [Click to view.](https://github.com/open-mmlab/mmagic/tree/main/projects)
-- Improve converters scripts and documents for datasets. [Click to view.](https://github.com/open-mmlab/mmagic/tree/main/tools/dataset_converters)
-
-## v1.0.0rc7 (07/04/2023)
-
-**Highlights**
-
-We are excited to announce the release of MMEditing 1.0.0rc7. This release supports 51+ models, 226+ configs and 212+ checkpoints in MMGeneration and MMEditing. We highlight the following new features
-
-- Support DiffuserWrapper
-- Support ControlNet (training and inference).
-- Support PyTorch 2.0.
-
-**New Features & Improvements**
-
-- Support DiffuserWrapper. [#1692](https://github.com/open-mmlab/mmagic/pull/1692)
-- Support ControlNet (training and inference). [#1744](https://github.com/open-mmlab/mmagic/pull/1744)
-- Support PyTorch 2.0 (successfully compile 33+ models on 'inductor' backend). [#1742](https://github.com/open-mmlab/mmagic/pull/1742)
-- Support Image Super-Resolution and Video Super-Resolution models inferencer. [#1662](https://github.com/open-mmlab/mmagic/pull/1662), [#1720](https://github.com/open-mmlab/mmagic/pull/1720)
-- Refactor tools/get_flops script. [#1675](https://github.com/open-mmlab/mmagic/pull/1675)
-- Refactor dataset_converters and documents for datasets. [#1690](https://github.com/open-mmlab/mmagic/pull/1690)
-- Move stylegan ops to MMCV. [#1383](https://github.com/open-mmlab/mmagic/pull/1383)
-
-**Bug Fixes**
-
-- Fix disco inferencer. [#1673](https://github.com/open-mmlab/mmagic/pull/1673)
-- Fix nafnet optimizer config. [#1716](https://github.com/open-mmlab/mmagic/pull/1716)
-- Fix tof typo. [#1711](https://github.com/open-mmlab/mmagic/pull/1711)
-
-**Contributors**
-
-A total of 8 developers contributed to this release.
-Thanks @LeoXing1996, @Z-Fran, @plyfager, @zengyh1900, @liuwenran, @ryanxingql, @HAOCHENYE, @VongolaWu
-
-**New Contributors**
-
-- @HAOCHENYE made their first contribution in https://github.com/open-mmlab/mmagic/pull/1712
-
-## v1.0.0rc6 (02/03/2023)
-
-**Highlights**
-
-We are excited to announce the release of MMEditing 1.0.0rc6. This release supports 50+ models, 222+ configs and 209+ checkpoints in MMGeneration and MMEditing. We highlight the following new features
-
-- Support Gradio gui of Inpainting inference.
-- Support Colorization, Translation and GAN models inferencer.
-
-**New Features & Improvements**
-
-- Refactor FileIO. [#1572](https://github.com/open-mmlab/mmagic/pull/1572)
-- Refactor registry. [#1621](https://github.com/open-mmlab/mmagic/pull/1621)
-- Refactor Random degradations. [#1583](https://github.com/open-mmlab/mmagic/pull/1583)
-- Refactor DataSample, DataPreprocessor, Metric and Loop. [#1656](https://github.com/open-mmlab/mmagic/pull/1656)
-- Use `mmengine.BaseModule` instead of `nn.Module`. [#1491](https://github.com/open-mmlab/mmagic/pull/1491)
-- Refactor Main Page. [#1609](https://github.com/open-mmlab/mmagic/pull/1609)
-- Support Gradio gui of Inpainting inference. [#1601](https://github.com/open-mmlab/mmagic/pull/1601)
-- Support Colorization inferencer. [#1588](https://github.com/open-mmlab/mmagic/pull/1588)
-- Support Translation models inferencer. [#1650](https://github.com/open-mmlab/mmagic/pull/1650)
-- Support GAN models inferencer. [#1653](https://github.com/open-mmlab/mmagic/pull/1653), [#1659](https://github.com/open-mmlab/mmagic/pull/1659)
-- Print config tool. [#1590](https://github.com/open-mmlab/mmagic/pull/1590)
-- Improve type hints. [#1604](https://github.com/open-mmlab/mmagic/pull/1604)
-- Update Chinese documents of metrics and datasets. [#1568](https://github.com/open-mmlab/mmagic/pull/1568), [#1638](https://github.com/open-mmlab/mmagic/pull/1638)
-- Update Chinese documents of BigGAN and Disco-Diffusion. [#1620](https://github.com/open-mmlab/mmagic/pull/1620)
-- Update Evaluation and README of Guided-Diffusion. [#1547](https://github.com/open-mmlab/mmagic/pull/1547)
-
-**Bug Fixes**
-
-- Fix the meaning of `momentum` in EMA. [#1581](https://github.com/open-mmlab/mmagic/pull/1581)
-- Fix output dtype of RandomNoise. [#1585](https://github.com/open-mmlab/mmagic/pull/1585)
-- Fix pytorch2onnx tool. [#1629](https://github.com/open-mmlab/mmagic/pull/1629)
-- Fix API documents. [#1641](https://github.com/open-mmlab/mmagic/pull/1641), [#1642](https://github.com/open-mmlab/mmagic/pull/1642)
-- Fix loading RealESRGAN EMA weights. [#1647](https://github.com/open-mmlab/mmagic/pull/1647)
-- Fix arg passing bug of dataset_converters scripts. [#1648](https://github.com/open-mmlab/mmagic/pull/1648)
-
-**Contributors**
-
-A total of 17 developers contributed to this release.
-Thanks @plyfager, @LeoXing1996, @Z-Fran, @zengyh1900, @VongolaWu, @liuwenran, @austinmw, @dienachtderwelt, @liangzelong, @i-aki-y, @xiaomile, @Li-Qingyun, @vansin, @Luo-Yihang, @ydengbi, @ruoningYu, @triple-Mu
-
-**New Contributors**
-
-- @dienachtderwelt made their first contribution in https://github.com/open-mmlab/mmagic/pull/1578
-- @i-aki-y made their first contribution in https://github.com/open-mmlab/mmagic/pull/1590
-- @triple-Mu made their first contribution in https://github.com/open-mmlab/mmagic/pull/1618
-- @Li-Qingyun made their first contribution in https://github.com/open-mmlab/mmagic/pull/1640
-- @Luo-Yihang made their first contribution in https://github.com/open-mmlab/mmagic/pull/1648
-- @ydengbi made their first contribution in https://github.com/open-mmlab/mmagic/pull/1557
-
-## v1.0.0rc5 (04/01/2023)
-
-**Highlights**
-
-We are excited to announce the release of MMEditing 1.0.0rc5. This release supports 49+ models, 180+ configs and 177+ checkpoints in MMGeneration and MMEditing. We highlight the following new features
-
-- Support Restormer.
-- Support GLIDE.
-- Support SwinIR.
-- Support Stable Diffusion.
-
-**New Features & Improvements**
-
-- Disco notebook. (#1507)
-- Revise test requirements and CI. (#1514)
-- Recursively generate summaries and docstrings. (#1517)
-- Enable projects. (#1526)
-- Support mscoco dataset. (#1520)
-- Improve Chinese documents. (#1532)
-- Type hints. (#1481)
-- Update download link of checkpoints. (#1554)
-- Update deployment guide. (#1551)
-
-**Bug Fixes**
-
-- Fix documentation link checker. (#1522)
-- Fix ssim first channel bug. (#1515)
-- Fix extract_gt_data of realesrgan. (#1542)
-- Fix model index. (#1559)
-- Fix config path in disco-diffusion. (#1553)
-- Fix text2image inferencer. (#1523)
-
-**Contributors**
-
-A total of 16 developers contributed to this release.
-Thanks @plyfager, @LeoXing1996, @Z-Fran, @zengyh1900, @VongolaWu, @liuwenran, @AlexZou14, @lvhan028, @xiaomile, @ldr426, @austin273, @whu-lee, @willaty, @curiosity654, @Zdafeng, @Taited
-
-**New Contributors**
-
-- @xiaomile made their first contribution in https://github.com/open-mmlab/mmagic/pull/1481
-- @ldr426 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1542
-- @austin273 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1553
-- @whu-lee made their first contribution in https://github.com/open-mmlab/mmagic/pull/1539
-- @willaty made their first contribution in https://github.com/open-mmlab/mmagic/pull/1541
-- @curiosity654 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1556
-- @Zdafeng made their first contribution in https://github.com/open-mmlab/mmagic/pull/1476
-- @Taited made their first contribution in https://github.com/open-mmlab/mmagic/pull/1534
-
-## v1.0.0rc4 (05/12/2022)
-
-**Highlights**
-
-We are excited to announce the release of MMEditing 1.0.0rc4. This release supports 45+ models, 176+ configs and 175+ checkpoints in MMGeneration and MMEditing. We highlight the following new features
-
-- Support High-level APIs.
-- Support diffusion models.
-- Support Text2Image Task.
-- Support 3D-Aware Generation.
-
-**New Features & Improvements**
-
-- Refactor High-level APIs. (#1410)
-- Support disco-diffusion text-2-image. (#1234, #1504)
-- Support EG3D. (#1482, #1493, #1494, #1499)
-- Support NAFNet model. (#1369)
-
-**Bug Fixes**
-
-- fix srgan train config. (#1441)
-- fix cain config. (#1404)
-- fix rdn and srcnn train configs. (#1392)
-
-**Contributors**
-
-A total of 14 developers contributed to this release.
-Thanks @plyfager, @LeoXing1996, @Z-Fran, @zengyh1900, @VongolaWu, @gaoyang07, @ChangjianZhao, @zxczrx123, @jackghosts, @liuwenran, @CCODING04, @RoseZhao929, @shaocongliu, @liangzelong.
-
-**New Contributors**
-
-- @gaoyang07 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1372
-- @ChangjianZhao made their first contribution in https://github.com/open-mmlab/mmagic/pull/1461
-- @zxczrx123 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1462
-- @jackghosts made their first contribution in https://github.com/open-mmlab/mmagic/pull/1463
-- @liuwenran made their first contribution in https://github.com/open-mmlab/mmagic/pull/1410
-- @CCODING04 made their first contribution in https://github.com/open-mmlab/mmagic/pull/783
-- @RoseZhao929 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1474
-- @shaocongliu made their first contribution in https://github.com/open-mmlab/mmagic/pull/1470
-- @liangzelong made their first contribution in https://github.com/open-mmlab/mmagic/pull/1488
-
-## v1.0.0rc3 (10/11/2022)
-
-**Highlights**
-
-We are excited to announce the release of MMEditing 1.0.0rc3. This release supports 43+ models, 170+ configs and 169+ checkpoints in MMGeneration and MMEditing. We highlight the following new features
-
-- Convert `mmdet` and `clip` to optional requirements.
-
-**New Features & Improvements**
-
-- Support `try_import` for `mmdet`. (#1408)
-- Support `try_import` for `clip`. (#1420)
-- Update `.gitignore`. (#1416)
-- Set `real_feat` to cpu in `inception_utils`. (#1415)
-- Modify README and configs of StyleGAN2 and PEGAN. (#1418)
-- Improve the rendering of Docs-API. (#1373)
-
-**Bug Fixes**
-
-- Revise config and pretrained model loading in ESRGAN. (#1407)
-- Revise config of LSGAN. (#1409)
-- Revise config of CAIN. (#1404)
-
-**Contributors**
-
-A total of 5 developers contributed to this release.
-@Z-Fran, @zengyh1900, @plyfager, @LeoXing1996, @ruoningYu.
-
-## v1.0.0rc2 (02/11/2022)
-
-**Highlights**
-
-We are excited to announce the release of MMEditing 1.0.0rc2. This release supports 43+ models, 170+ configs and 169+ checkpoints in MMGeneration and MMEditing. We highlight the following new features
-
-- patch-based and slider-based image and video comparison viewer.
-- image colorization.
-
-**New Features & Improvements**
-
-- Support qualitative comparison tools. (#1303)
-- Support instance aware colorization. (#1370)
-- Support multi-metrics with different sample-model. (#1171)
-- Improve the implementation
-  - Refactor evaluation metrics. (#1164)
-  - Save gt images in PGGAN's `forward`. (#1332)
-  - Improve type and change default number of `preprocess_div2k_dataset.py`. (#1380)
-  - Support pixel value clip in visualizer. (#1365)
-  - Support SinGAN Dataset and SinGAN demo. (#1363)
-  - Avoid cast int and float in GenDataPreprocessor. (#1385)
-- Improve the documentation
-  - Update a menu switcher. (#1162)
-  - Fix TTSR's README. (#1325)
-
-**Bug Fixes**
-
-- Fix PPL bug. (#1172)
-- Fix RDN number of channels. (#1328)
-- Fix types of exceptions in demos. (#1372)
-- Fix realesrgan ema. (#1341)
-- Improve the assertion to ensure `GenerateFacialHeatmap` outputs `np.float32`. (#1310)
-- Fix sampling behavior of `unpaired_dataset.py` and URLs in CycleGAN's README. (#1308)
-- Fix vsr models in pytorch2onnx. (#1300)
-- Fix incorrect settings in configs. (#1167,#1200,#1236,#1293,#1302,#1304,#1319,#1331,#1336,#1349,#1352,#1353,#1358,#1364,#1367,#1384,#1386,#1391,#1392,#1393)
-
-**New Contributors**
-
-- @gaoyang07 made their first contribution in https://github.com/open-mmlab/mmagic/pull/1372
-
-**Contributors**
-
-A total of 7 developers contributed to this release.
-Thanks @LeoXing1996, @Z-Fran, @zengyh1900, @plyfager, @ryanxingql, @ruoningYu, @gaoyang07.
-
-## v1.0.0rc1 (23/09/2022)
-
-MMEditing 1.0.0rc1 has merged MMGeneration 1.x.
-
-- Support 42+ algorithms, 169+ configs and 168+ checkpoints.
-- Support 26+ loss functions, 20+ metrics.
-- Support tensorboard, wandb.
-- Support unconditional GANs, conditional GANs, image2image translation and internal learning.
-
-## v1.0.0rc0 (31/08/2022)
-
-MMEditing 1.0.0rc0 is the first version of MMEditing 1.x, a part of the OpenMMLab 2.0 projects.
-
-Built upon the new [training engine](https://github.com/open-mmlab/mmengine), MMEditing 1.x unifies the interfaces of dataset, models, evaluation, and visualization.
-
-And there are some BC-breaking changes. Please check [the migration tutorial](https://mmagic.readthedocs.io/en/latest/migration/overview.html) for more details.
diff --git a/docs/en/community/contributing.md b/docs/en/community/contributing.md
deleted file mode 100644
index 5ac6ec4570..0000000000
--- a/docs/en/community/contributing.md
+++ /dev/null
@@ -1,275 +0,0 @@
-# Contributing guidance
-
-Welcome to the MMagic community, we are committed to building a Multimodal Advanced, Generative, and Intelligent Creation Toolbox.
-
-This section introduces the following contents:
-
-- [Contributing guidance](#contributing-guidance)
-  - [Pull Request Workflow](#pull-request-workflow)
-    - [1. Fork and clone](#1-fork-and-clone)
-    - [2. Configure pre-commit](#2-configure-pre-commit)
-    - [3. Create a development branch](#3-create-a-development-branch)
-    - [4. Commit the code and pass the unit test](#4-commit-the-code-and-pass-the-unit-test)
-    - [5. Push the code to remote](#5-push-the-code-to-remote)
-    - [6. Create a Pull Request](#6-create-a-pull-request)
-    - [7. Resolve conflicts](#7-resolve-conflicts)
-  - [Guidance](#guidance)
-    - [Unit test](#unit-test)
-    - [Document rendering](#document-rendering)
-  - [Code style](#code-style)
-    - [Python](#python)
-    - [C++ and CUDA](#c-and-cuda)
-  - [PR Specs](#pr-specs)
-
-All kinds of contributions are welcome, including but not limited to
-
-**Fix bug**
-
-You can directly post a Pull Request to fix typos in code or documents.
-
-The steps to fix a bug in the code implementation are as follows.
-
-1. If the modification involves significant changes, you should create an issue first, describing the error and how to trigger the bug. Other developers will discuss it with you and propose a proper solution.
-2. Post a pull request after fixing the bug and adding the corresponding unit tests.
-
-**New Feature or Enhancement**
-
-1. If the modification involves significant changes, you should create an issue to discuss a proper design with our developers.
-2. Post a Pull Request after implementing the new feature or enhancement and add the corresponding unit tests.
-
-**Document**
-
-You can directly post a pull request to fix documents. If you want to add a document, you should first create an issue to check if it is reasonable.
-
-### Pull Request Workflow
-
-If you're not familiar with Pull Requests, don't worry! The following guidance will tell you how to create a Pull Request step by step. If you want to dive into the development mode of Pull Requests, you can refer to the [official documents](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests).
-
-#### 1. Fork and clone
-
-If you are posting a pull request for the first time, you should fork the OpenMMLab repositories by clicking the **Fork** button in the top right corner of the GitHub page, and the forked repositories will appear under your GitHub profile.
-
-<img src="https://user-images.githubusercontent.com/57566630/167305749-43c7f4e9-449b-4e98-ade5-0c9276d5c9ce.png" width="1200">
-
-Then, you can clone the repositories to your local machine:
-
-```shell
-git clone git@github.com:{username}/mmagic.git
-```
-
-After that, you should add the official repository as the upstream repository:
-
-```bash
-git remote add upstream git@github.com:open-mmlab/mmagic
-```
-
-Check whether the remote repository has been added successfully with `git remote -v`:
-
-```bash
-origin	git@github.com:{username}/mmagic.git (fetch)
-origin	git@github.com:{username}/mmagic.git (push)
-upstream	git@github.com:open-mmlab/mmagic (fetch)
-upstream	git@github.com:open-mmlab/mmagic (push)
-```
-
-```{note}
-Here's a brief introduction to origin and upstream. When we use "git clone", we create an "origin" remote by default, which points to the repository cloned from. As for "upstream", we add it ourselves to point to the target repository. Of course, if you don't like the name "upstream", you could name it as you wish. Usually, we'll push the code to "origin". If the pushed code conflicts with the latest code in the official repository ("upstream"), we should pull the latest code from upstream to resolve the conflicts, and then push to "origin" again. The posted Pull Request will be updated automatically.
-```
-
-#### 2. Configure pre-commit
-
-You should configure [pre-commit](https://pre-commit.com/#intro) in the local development environment to make sure the code style matches that of OpenMMLab. **Note**: The following code should be executed under the mmagic directory.
-
-```shell
-pip install -U pre-commit
-pre-commit install
-```
-
-Check that pre-commit is configured successfully and install the hooks defined in `.pre-commit-config.yaml`:
-
-```shell
-pre-commit run --all-files
-```
-
-<img src="https://user-images.githubusercontent.com/57566630/173660750-3df20a63-cb66-4d33-a986-1f643f1d8aaf.png" width="1200">
-
-<img src="https://user-images.githubusercontent.com/57566630/202368856-0465a90d-8fce-4345-918e-67b8b9c82614.png" width="1200">
-
-```{note}
-Chinese users may fail to download the pre-commit hooks due to network issues. In this case, you can download these hooks from Gitee by using `.pre-commit-config-zh-cn.yaml`:
-
-pre-commit install -c .pre-commit-config-zh-cn.yaml
-pre-commit run --all-files -c .pre-commit-config-zh-cn.yaml
-```
-
-If the installation process is interrupted, you can rerun `pre-commit run ...` to continue the installation.
-
-If the code does not conform to the code style specification, pre-commit will raise a warning and fix some of the errors automatically.
-
-<img src="https://user-images.githubusercontent.com/57566630/202369176-67642454-0025-4023-a095-263529107aa3.png" width="1200">
-
-If we want to commit our code bypassing the pre-commit hook, we can use the `--no-verify` option (**only for temporary commits**):
-
-```shell
-git commit -m "xxx" --no-verify
-```
-
-#### 3. Create a development branch
-
-After configuring pre-commit, we should create a branch based on the main branch to develop the new feature or fix the bug. The proposed branch name is `username/pr_name`:
-
-```shell
-git checkout -b yhc/refactor_contributing_doc
-```
-
-In subsequent development, if the main branch of the local repository falls behind the main branch of "upstream", we need to pull the latest code from upstream for synchronization, and then execute the above checkout command again:
-
-```shell
-git pull upstream main
-```
-
-#### 4. Commit the code and pass the unit test
-
-- mmagic introduces mypy to do static type checking to increase the robustness of the code. Therefore, we need to add Type Hints to our code and pass the mypy check. If you are not familiar with Type Hints, you can refer to [this tutorial](https://docs.python.org/3/library/typing.html). A minimal sketch is shown after this list.
-
-- The committed code should pass the unit tests
-
-  ```shell
-  # Pass all unit tests
-  pytest tests
-
-  # Pass the unit test of runner
-  pytest tests/test_runner/test_runner.py
-  ```
-
-  If the unit tests fail due to missing dependencies, you can install them referring to the [guidance](#unit-test).
-
-- If the documents are modified/added, we should check the rendering result referring to the [guidance](#document-rendering).
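-
-Below is a minimal, hypothetical sketch (the helper is not part of mmagic) of the kind of annotations mypy checks:
-
-```python
-from typing import Optional, Tuple
-
-
-def scale_size(size: Tuple[int, int],
-               factor: float,
-               max_side: Optional[int] = None) -> Tuple[int, int]:
-    """Scale a (width, height) pair, optionally capping each side.
-
-    With these annotations, a call such as `scale_size((256, 256), '2')`
-    is flagged by mypy before the code ever runs.
-    """
-    w, h = int(size[0] * factor), int(size[1] * factor)
-    if max_side is not None:
-        w, h = min(w, max_side), min(h, max_side)
-    return w, h
-```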
-
-#### 5. Push the code to remote
-
-We can push the local commits to the remote branch after passing the unit tests and pre-commit checks. You can associate the local branch with the remote branch by adding the `-u` option.
-
-```shell
-git push -u origin {branch_name}
-```
-
-This will allow you to use the `git push` command to push code directly next time, without having to specify a branch or the remote repository.
-
-#### 6. Create a Pull Request
-
-(1) Create a pull request in GitHub's Pull request interface
-
-<img src="https://user-images.githubusercontent.com/57566630/201533288-516f7ac4-0b14-4dc8-afbd-912475c368b5.png" width="1200">
-
-(2) Modify the PR description according to the guidelines so that other developers can better understand your changes
-
-<img src="https://user-images.githubusercontent.com/57566630/202242953-c91a18ff-e388-4ff9-8591-5fae0ead6c1e.png" width="1200">
-
-Find more details about Pull Request description in [pull request guidelines](#pr-specs).
-
-**note**
-
-(a) The Pull Request description should contain the reason for the change, the content of the change, and the impact of the change, and be associated with the relevant Issue (see the [documentation](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue)).
-
-(b) If it is your first contribution, please sign the CLA
-
-<img src="https://user-images.githubusercontent.com/57566630/167307569-a794b967-6e28-4eac-a942-00deb657815f.png" width="1200">
-
-(c) Check whether the Pull Request passes the CI
-
-<img src="https://user-images.githubusercontent.com/57566630/167307490-f9ebf9fa-63c0-4d83-8ba1-081ea169eb3a.png" width="1200">
-
-mmagic will run unit tests for the posted Pull Request on different platforms (Linux, Windows, macOS) and different versions of Python, PyTorch, and CUDA to make sure the code is correct. We can see the specific test information by clicking `Details` in the above image so that we can modify the code accordingly.
-
-(3) If the Pull Request passes the CI, then you can wait for the review from other developers. You'll modify the code based on the reviewer's comments, and repeat the steps [4](#4-commit-the-code-and-pass-the-unit-test)-[5](#5-push-the-code-to-remote) until all reviewers approve it. Then, we will merge it ASAP.
-
-<img src="https://user-images.githubusercontent.com/57566630/202145400-cc2cd8c4-10b0-472f-ba37-07e6f50acc67.png" width="1200">
-
-#### 7. Resolve conflicts
-
-If your local branch conflicts with the latest main branch of "upstream", you'll need to resolve the conflicts. There are two ways to do this:
-
-```shell
-git fetch --all --prune
-git rebase upstream/main
-```
-
-or
-
-```shell
-git fetch --all --prune
-git merge upstream/main
-```
-
-If you are very good at handling conflicts, then you can use rebase to resolve conflicts, as this will keep your commit logs tidy. If you are not familiar with `rebase`, then you can use `merge` to resolve conflicts.
-
-### Guidance
-
-#### Unit test
-
-We should make sure the committed code does not decrease the unit test coverage. We can run the following commands to check it:
-
-```shell
-python -m coverage run -m pytest /path/to/test_file
-python -m coverage html
-# check file in htmlcov/index.html
-```
-
-#### Document rendering
-
-If the documents are modified/added, we should check the rendering result. We can install the dependencies and run the following commands to render the documents and check the results:
-
-```shell
-pip install -r requirements/docs.txt
-cd docs/zh_cn/
-# or docs/en
-make html
-# check file in ./docs/zh_cn/_build/html/index.html
-```
-
-### Code style
-
-#### Python
-
-We adopt [PEP8](https://www.python.org/dev/peps/pep-0008/) as the preferred code style.
-
-We use the following tools for linting and formatting:
-
-- [flake8](https://github.com/PyCQA/flake8): A wrapper around some linter tools.
-- [isort](https://github.com/timothycrosley/isort): A Python utility to sort imports.
-- [yapf](https://github.com/google/yapf): A formatter for Python files.
-- [codespell](https://github.com/codespell-project/codespell): A Python utility to fix common misspellings in text files.
-- [mdformat](https://github.com/executablebooks/mdformat): Mdformat is an opinionated Markdown formatter that can be used to enforce a consistent style in Markdown files.
-- [docformatter](https://github.com/myint/docformatter): A formatter to format docstring.
-
-Style configurations of yapf and isort can be found in [setup.cfg](../../../setup.cfg).
-
-We use [pre-commit hook](https://pre-commit.com/) that checks and formats for `flake8`, `yapf`, `isort`, `trailing whitespaces`, `markdown files`,
-fixes `end-of-files`, `double-quoted-strings`, `python-encoding-pragma`, `mixed-line-ending`, sorts `requirements.txt` automatically on every commit.
-The config for a pre-commit hook is stored in [.pre-commit-config](../../../.pre-commit-config.yaml).
-
-#### C++ and CUDA
-
-We follow the [Google C++ Style Guide](https://google.github.io/styleguide/cppguide.html).
-
-### PR Specs
-
-1. Use [pre-commit](https://pre-commit.com) hook to avoid issues of code style
-
-2. One short-time branch should be matched with only one PR
-
-3. Accomplish a detailed change in one PR. Avoid large PR
-
-   - Bad: Support Faster R-CNN
-   - Acceptable: Add a box head to Faster R-CNN
-   - Good: Add a parameter to box head to support custom conv-layer number
-
-4. Provide clear and significant commit message
-
-5. Provide clear and meaningful PR description
-
-   - Task name should be clarified in title. The general format is: \[Prefix\] Short description of the PR (Suffix)
-   - Prefix: add new feature \[Feature\], fix bug \[Fix\], related to documents \[Docs\], in developing \[WIP\] (which will not be reviewed temporarily)
-   - Introduce main changes, results and influences on other modules in short description
-   - Associate related issues and pull requests with a milestone
diff --git a/docs/en/community/projects.md b/docs/en/community/projects.md
deleted file mode 100644
index 431fbcbd03..0000000000
--- a/docs/en/community/projects.md
+++ /dev/null
@@ -1,67 +0,0 @@
-# MMagic projects
-
-Welcome to the MMagic community!
-The MMagic ecosystem consists of tutorials, libraries, and projects from a broad set of researchers in academia and industry, as well as ML and application engineers.
-The goal of this ecosystem is to support, accelerate, and aid in your exploration with MMagic for AIGC tasks such as image, video, and 3D content generation, editing, and processing.
-
-Here are a few projects that are built upon MMagic. They are examples of how to use MMagic as a library, to make your projects more maintainable.
-Please find more projects in [MMagic Ecosystem](https://openmmlab.com/ecosystem).
-
-## Show your projects on OpenMMLab Ecosystem
-
-You can submit your project so that it can be shown on the homepage of [OpenMMLab](https://openmmlab.com/ecosystem).
-
-## Add example projects to MMagic
-
-Here is an [example project](../../../projects/example_project) about how to add your projects to MMagic.
-You can copy and create your own project from the [example project](../../../projects/example_project).
-
-We also provide some documentation listed below for your reference:
-
-- [Contribution Guide](https://mmagic.readthedocs.io/en/latest/community/contributing.html)
-
-  The guides for new contributors about how to add your projects to MMagic.
-
-- [New Model Guide](https://mmagic.readthedocs.io/en/latest/howto/models.html)
-
-  The documentation of adding new models.
-
-- [Discussions](https://github.com/open-mmlab/mmagic/discussions)
-
-  Welcome to start a discussion!
-
-## Projects of libraries and toolboxes
-
-- [PowerVQE](https://github.com/ryanxingql/powervqe): Open framework for quality enhancement of compressed videos based on PyTorch and MMagic.
-
-- [VR-Baseline](https://github.com/linjing7/VR-Baseline): Video Restoration Toolbox.
-
-- [Derain-Toolbox](https://github.com/biubiubiiu/derain-toolbox): Single Image Deraining Toolbox and Benchmark
-
-## Projects of research papers
-
-- [Towards Interpretable Video Super-Resolution via Alternating Optimization, ECCV 2022](https://arxiv.org/abs/2207.10765)[\[github\]](https://github.com/caojiezhang/DAVSR)
-
-- [SepLUT: Separable Image-adaptive Lookup Tables for Real-time Image Enhancement, ECCV 2022](https://arxiv.org/abs/2207.08351)[\[github\]](https://github.com/ImCharlesY/SepLUT)
-
-- [TTVSR: Learning Trajectory-Aware Transformer for Video Super-Resolution, CVPR 2022](https://arxiv.org/abs/2204.04216)[\[github\]](https://github.com/researchmm/TTVSR)
-
-- [Arbitrary-Scale Image Synthesis, CVPR 2022](https://arxiv.org/pdf/2204.02273.pdf)[\[github\]](https://github.com/vglsd/ScaleParty)
-
-- [Investigating Tradeoffs in Real-World Video Super-Resolution (RealBasicVSR), CVPR 2022](https://arxiv.org/abs/2111.12704)[\[github\]](https://github.com/ckkelvinchan/RealBasicVSR)
-
-- [BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment, CVPR 2022](https://arxiv.org/abs/2104.13371)[\[github\]](https://github.com/ckkelvinchan/BasicVSR_PlusPlus)
-
-- [Multi-Scale Memory-Based Video Deblurring, CVPR 2022](https://arxiv.org/abs/2204.02977)[\[github\]](https://github.com/jibo27/MemDeblur)
-
-- [AdaInt: Learning Adaptive Intervals for 3D Lookup Tables on Real-time Image Enhancement, CVPR 2022](https://arxiv.org/abs/2204.13983)[\[github\]](https://github.com/ImCharlesY/AdaInt)
-
-- [A New Dataset and Transformer for Stereoscopic Video Super-Resolution, CVPRW 2022](https://openaccess.thecvf.com/content/CVPR2022W/NTIRE/papers/Imani_A_New_Dataset_and_Transformer_for_Stereoscopic_Video_Super-Resolution_CVPRW_2022_paper.pdf)[\[github\]](https://github.com/H-deep/Trans-SVSR)
-
-- [Liquid warping GAN with attention: A unified framework for human image synthesis, TPAMI 2021](https://arxiv.org/pdf/2011.09055.pdf)[\[github\]](https://github.com/iPERDance/iPERCore)
-
-- [BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond, CVPR 2021](https://arxiv.org/abs/2012.02181)[\[github\]](https://github.com/ckkelvinchan/BasicVSR-IconVSR)
-
-- [GLEAN: Generative Latent Bank for Large-Factor Image Super-Resolution, CVPR 2021](https://arxiv.org/abs/2012.00739)[\[github\]](https://github.com/ckkelvinchan/GLEAN)
-
-- [DAN: Unfolding the Alternating Optimization for Blind Super Resolution, NeurIPS 2020](https://arxiv.org/abs/2010.02631v4)[\[github\]](https://github.com/AlexZou14/DAN-Basd-on-Openmmlab)
diff --git a/docs/en/conf.py b/docs/en/conf.py
deleted file mode 100644
index 2465fe4512..0000000000
--- a/docs/en/conf.py
+++ /dev/null
@@ -1,185 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-# Configuration file for the Sphinx documentation builder.
-#
-# This file only contains a selection of the most common options. For a full
-# list see the documentation:
-# https://www.sphinx-doc.org/en/master/usage/configuration.html
-
-# -- Path setup --------------------------------------------------------------
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-import os
-import subprocess
-import sys
-
-import pytorch_sphinx_theme
-
-sys.path.insert(0, os.path.abspath('../..'))
-
-# -- Project information -----------------------------------------------------
-
-project = 'MMagic'
-copyright = '2023, MMagic Authors'
-author = 'MMagic Authors'
-
-# -- General configuration ---------------------------------------------------
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-extensions = [
-    'sphinx.ext.intersphinx',
-    'sphinx.ext.napoleon',
-    'sphinx.ext.viewcode',
-    'sphinx.ext.autosectionlabel',
-    'sphinx_markdown_tables',
-    'sphinx_copybutton',
-    'sphinx_tabs.tabs',
-    'myst_parser',
-]
-
-extensions.append('notfound.extension')  # enable customizing not-found page
-
-extensions.append('autoapi.extension')
-autoapi_type = 'python'
-autoapi_dirs = ['../../mmagic']
-autoapi_add_toctree_entry = False
-autoapi_template_dir = '_templates'
-# autoapi_options = ['members', 'undoc-members', 'show-module-summary']
-
-# # Core library for html generation from docstrings
-# extensions.append('sphinx.ext.autodoc')
-# extensions.append('sphinx.ext.autodoc.typehints')
-# # Enable 'expensive' imports for sphinx_autodoc_typehints
-# set_type_checking_flag = True
-# # Sphinx-native method. Not as good as sphinx_autodoc_typehints
-# autodoc_typehints = "description"
-
-# extensions.append('sphinx.ext.autosummary') # Create neat summary tables
-# autosummary_generate = True  # Turn on sphinx.ext.autosummary
-# # Add __init__ doc (ie. params) to class summaries
-# autoclass_content = 'both'
-# autodoc_skip_member = []
-# # If no docstring, inherit from base class
-# autodoc_inherit_docstrings = True
-
-autodoc_mock_imports = [
-    'mmagic.version', 'mmcv._ext', 'mmcv.ops.ModulatedDeformConv2d',
-    'mmcv.ops.modulated_deform_conv2d', 'clip', 'resize_right', 'pandas'
-]
-
-source_suffix = {
-    '.rst': 'restructuredtext',
-    '.md': 'markdown',
-}
-
-# # Remove 'view source code' from top of page (for html, not python)
-# html_show_sourcelink = False
-# nbsphinx_allow_errors = True  # Continue through Jupyter errors
-# add_module_names = False  # Remove namespaces from class/method signatures
-
-# Ignore >>> when copying code
-copybutton_prompt_text = r'>>> |\.\.\. '
-copybutton_prompt_is_regexp = True
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-# This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
-
-# -- Options for HTML output -------------------------------------------------
-
-# The theme to use for HTML and HTML Help pages.  See the documentation for
-# a list of builtin themes.
-#
-# html_theme = 'sphinx_rtd_theme'
-html_theme = 'pytorch_sphinx_theme'
-html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
-
-html_theme_options = {
-    'menu': [
-        {
-            'name': 'GitHub',
-            'url': 'https://github.com/open-mmlab/mmagic',
-        },
-        {
-            'name':
-            'Version',
-            'children': [
-                {
-                    'name': 'MMagic 1.x',
-                    'url': 'https://mmagic.readthedocs.io/en/latest/',
-                    'description': 'Main branch'
-                },
-                {
-                    'name': 'MMEditing 0.x',
-                    'url': 'https://mmagic.readthedocs.io/en/0.x/',
-                    'description': '0.x branch',
-                },
-            ],
-            'active':
-            True,
-        },
-    ],
-    'menu_lang':
-    'en',
-}
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
-html_css_files = ['css/readthedocs.css']
-
-myst_enable_extensions = ['colon_fence']
-myst_heading_anchors = 3
-
-language = 'en'
-
-# The master toctree document.
-root_doc = 'index'
-notfound_template = '404.html'
-
-
-def builder_inited_handler(app):
-    # Regenerate the model zoo and dataset zoo pages before building docs.
-    subprocess.run(['python', './.dev_scripts/update_model_zoo.py'])
-    subprocess.run(['python', './.dev_scripts/update_dataset_zoo.py'])
-
-
-def skip_member(app, what, name, obj, skip, options):
-    # Hide package- and module-level entries from the generated API docs.
-    if what == 'package' or what == 'module':
-        skip = True
-    return skip
-
-
-def viewcode_follow_imported(app, modname, attribute):
-    # Resolve the module that originally defines an imported attribute so
-    # 'view source' links point at the real implementation.
-    fullname = f'{modname}.{attribute}'
-    all_objects = app.env.autoapi_all_objects
-    if fullname not in all_objects:
-        return None
-
-    if all_objects[fullname].obj.get('type') == 'method':
-        fullname = fullname[:fullname.rfind('.')]
-        attribute = attribute[:attribute.rfind('.')]
-    while all_objects[fullname].obj.get('original_path', '') != '':
-        fullname = all_objects[fullname].obj.get('original_path')
-
-    orig_path = fullname
-    if orig_path.endswith(attribute):
-        return orig_path[:-len(attribute) - 1]
-
-    return modname
-
-
-def setup(app):
-    app.connect('builder-inited', builder_inited_handler)
-    app.connect('autoapi-skip-member', skip_member)
-    if 'viewcode-follow-imported' in app.events.events:
-        app.connect(
-            'viewcode-follow-imported', viewcode_follow_imported, priority=0)
diff --git a/docs/en/device/npu.md b/docs/en/device/npu.md
deleted file mode 100644
index fa3a8c1405..0000000000
--- a/docs/en/device/npu.md
+++ /dev/null
@@ -1,29 +0,0 @@
-# NPU (HUAWEI Ascend)
-
-## Usage
-
-Please refer to the [building documentation of MMCV](https://mmcv.readthedocs.io/en/latest/get_started/build.html#build-mmcv-full-on-ascend-npu-machine) to install MMCV and [mmengine](https://mmengine.readthedocs.io/en/latest/get_started/installation.html#build-from-source) on NPU devices.
-
-Here we use 8 NPUs to train the model with the following command:
-
-```shell
-bash tools/dist_train.sh configs/edsr/edsr_x2c64b16_1xb16-300k_div2k.py 8
-```
-
-Also, you can use only one NPU to train the model with the following command:
-
-```shell
-python tools/train.py configs/edsr/edsr_x2c64b16_1xb16-300k_div2k.py
-```
-
-## Model Results
-
-| Model | Dataset | PSNR | SSIM | Download |
-| :---: | :-----: | :--: | :--: | :------: |
-| [edsr_x2c64b16_1xb16-300k_div2k](https://github.com/open-mmlab/mmagic/blob/main/configs/edsr/edsr_x2c64b16_1xb16-300k_div2k.py) | DIV2K | 35.83 | 0.94 | [log](https://download.openmmlab.com/mmediting/device/npu/edsr/edsr_x2c64b16_1xb16-300k_div2k.log) |
-
-**Notes:**
-
-- If not specially marked, the results on NPU with amp are basically the same as those on GPU with FP32.
-
-**All above models are provided by Huawei Ascend group.**
diff --git a/docs/en/docutils.conf b/docs/en/docutils.conf
deleted file mode 100644
index 0c00c84688..0000000000
--- a/docs/en/docutils.conf
+++ /dev/null
@@ -1,2 +0,0 @@
-[html writers]
-table_style: colwidths-auto
diff --git a/docs/en/faq.md b/docs/en/faq.md
deleted file mode 100644
index 088f9c28fb..0000000000
--- a/docs/en/faq.md
+++ /dev/null
@@ -1,48 +0,0 @@
-# Frequently asked questions
-
-We list some common problems faced by many users and their corresponding
-solutions here. Feel free to enrich the list if you find any frequent issues
-and have ways to help others solve them. If the contents here do not cover
-your issue, please create an issue using the
-[provided templates](https://github.com/open-mmlab/mmagic/issues/new/choose)
-and make sure you fill in all required information in the template.
-
-## FAQ
-
-**Q1**: “xxx: ‘yyy is not in the zzz registry’”.
-
-**A1**: The registry mechanism will be triggered only when the file of the module is imported. So you need to import that file somewhere.
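-
-A minimal sketch, assuming an MMEngine-style config; the module path below is hypothetical and only shows how to force the import that triggers registration:
-
-```python
-# In your config file: import the module that registers `yyy`, so the
-# registry entry exists before the object is built.
-custom_imports = dict(
-    imports=['mmagic.models.editors.my_new_model'],  # hypothetical path
-    allow_failed_imports=False)
-```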
-
-**Q2**: What's the folder structure of xxx dataset?
-
-**A2**: You can make sure the folder structure is correct following tutorials of [dataset preparation](user_guides/dataset_prepare.md).
-
-**Q3**: How to use LMDB data to train the model?
-
-**A3**: You can use scripts in `tools/data` to make LMDB files. More details are shown in tutorials of [dataset preparation](user_guides/dataset_prepare.md).
-
-**Q4**: Why is `MMCV==xxx is used but incompatible` raised when I try to import `mmgen`?
-
-**A4**:
-This is because the versions of MMCV and MMGeneration are incompatible. Compatible MMGeneration and MMCV versions are shown below. Please choose the correct version of MMCV to avoid installation issues.
-
-| MMGeneration version |   MMCV version   |
-| :------------------: | :--------------: |
-|        master        | mmcv-full>=2.0.0 |
-
-Note: You need to run `pip uninstall mmcv` first if you have mmcv installed.
-If mmcv and mmcv-full are both installed, there will be `ModuleNotFoundError`.
-
-**Q5**: How can I ignore some fields in the base configs?
-
-**A5**:
-Sometimes, you may set `_delete_=True` to ignore some of the fields in base configs.
-You may refer to [MMEngine](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/config.md#delete-key-in-dict) for simple illustration.
-
-You may have a careful look at [this tutorial](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/config.md) for better understanding of this feature.
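-
-A minimal sketch of the idea; the base file and optimizer settings are hypothetical:
-
-```python
-# child_config.py
-_base_ = ['./base_config.py']  # hypothetical base config
-
-# `_delete_=True` drops every inherited key of `optim_wrapper` instead of
-# merging this dict into the one defined in the base config.
-optim_wrapper = dict(
-    _delete_=True,
-    optimizer=dict(type='AdamW', lr=1e-4))
-```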
-
-**Q6**: How can I use intermediate variables in configs?
-
-**A6**:
-Some intermediate variables are used in the config files, like `train_pipeline`/`test_pipeline` in datasets.
-It's worth noting that when modifying intermediate variables in child configs, users need to pass the intermediate variables into the corresponding fields again.
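-
-A minimal sketch of redefining an intermediate variable and passing it back in; the base config and the exact transforms are illustrative assumptions:
-
-```python
-# child_config.py
-_base_ = ['./base_config.py']  # hypothetical base config
-
-# Redefine the intermediate variable...
-train_pipeline = [
-    dict(type='LoadImageFromFile', key='img'),
-    dict(type='PackInputs'),
-]
-
-# ...and pass it back into the field that consumes it. Redefining
-# `train_pipeline` alone does not update the dataset inherited from the base.
-train_dataloader = dict(dataset=dict(pipeline=train_pipeline))
-```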
diff --git a/docs/en/get_started/install.md b/docs/en/get_started/install.md
deleted file mode 100644
index 1fa9e10109..0000000000
--- a/docs/en/get_started/install.md
+++ /dev/null
@@ -1,197 +0,0 @@
-# Installation
-
-In this section, you will know about:
-
-- [Installation](#installation)
-  - [Installation](#installation-1)
-    - [Prerequisites](#prerequisites)
-    - [Best practices](#best-practices)
-    - [Customize installation](#customize-installation)
-      - [CUDA Version](#cuda-version)
-      - [Install MMCV without MIM](#install-mmcv-without-mim)
-      - [Using MMagic with Docker](#using-mmagic-with-docker)
-      - [Troubleshooting](#troubleshooting)
-    - [Developing with multiple MMagic versions](#developing-with-multiple-mmagic-versions)
-
-## Installation
-
-We recommend that users follow our [Best practices](#best-practices) to install MMagic.
-However, the whole process is highly customizable. See [Customize installation](#customize-installation) section for more information.
-
-### Prerequisites
-
-In this section, we demonstrate how to prepare an environment with PyTorch.
-
-MMagic works on Linux, Windows, and macOS. It requires:
-
-- Python >= 3.7
-- [PyTorch](https://pytorch.org/) >= 1.8
-- [MMCV](https://github.com/open-mmlab/mmcv) >= 2.0.0
-
-If you are experienced with PyTorch and have already installed it,
-just skip this part and jump to the [next section](#best-practices). Otherwise, you can follow these steps for the preparation.
-
-**Step 0.**
-Download and install Miniconda from [official website](https://docs.conda.io/en/latest/miniconda.html).
-
-**Step 1.**
-Create a [conda environment](https://docs.conda.io/projects/conda/en/latest/user-guide/concepts/environments.html#) and activate it
-
-```shell
-conda create --name mmagic python=3.8 -y
-conda activate mmagic
-```
-
-**Step 2.**
-Install PyTorch following [official instructions](https://pytorch.org/get-started/locally/), e.g.
-
-- On GPU platforms:
-
-  ```shell
-  conda install pytorch torchvision cudatoolkit=11.3 -c pytorch
-  ```
-
-- On CPU platforms:
-
-  ```shell
-  conda install pytorch=1.10 torchvision cpuonly -c pytorch
-  ```
-
-### Best practices
-
-**Step 0.** Install [MMCV](https://github.com/open-mmlab/mmcv) using [MIM](https://github.com/open-mmlab/mim).
-
-```shell
-pip install -U openmim
-mim install 'mmcv>=2.0.0'
-```
-
-**Step 1.** Install [MMEngine](https://github.com/open-mmlab/mmengine).
-
-```shell
-mim install 'mmengine'
-```
-
-Or
-
-```shell
-pip install mmengine
-```
-
-Or
-
-```shell
-pip install git+https://github.com/open-mmlab/mmengine.git
-```
-
-**Step 2.** Install MMagic.
-
-```shell
-mim install 'mmagic'
-```
-
-Or
-
-```shell
-pip install mmagic
-```
-
-Or install [MMagic](https://github.com/open-mmlab/mmagic) from the source code.
-
-```shell
-git clone https://github.com/open-mmlab/mmagic.git
-cd mmagic
-pip3 install -e . -v
-```
-
-**Step 3.**
-Verify MMagic has been successfully installed.
-
-```shell
-cd ~
-python -c "import mmagic; print(mmagic.__version__)"
-# Example output: 1.0.0
-```
-
-The installation is successful if the version number is output correctly.
-
-```{note}
-You may be curious about what `-e .` means when supplied with `pip install`.
-Here is the description:
-
-- `-e` means [editable mode](https://pip.pypa.io/en/latest/cli/pip_install/#cmdoption-e).
-  When `import mmagic`, modules under the cloned directory are imported.
-  If `pip install` without `-e`, pip will copy the cloned code to somewhere like `lib/python/site-packages`.
-  Consequently, modified code under the cloned directory takes no effect unless `pip install` again.
-  Thus, `pip install` with `-e` is particularly convenient for developers. If some codes are modified, new codes will be imported next time without reinstallation.
-- `.` means code in this directory
-
-You can also use `pip install -e .[all]`, which will install more dependencies, especially for pre-commit hooks and unittests.
-```
-
-### Customize installation
-
-#### CUDA Version
-
-When installing PyTorch, you need to specify the version of CUDA. If you are not clear on which to choose, follow our recommendations:
-
-- For Ampere-based NVIDIA GPUs, such as GeForce 30 series and NVIDIA A100, CUDA 11 is a must.
-- For older NVIDIA GPUs, CUDA 11 is backward compatible, but CUDA 10.2 offers better compatibility and is more lightweight.
-
-Please make sure the GPU driver satisfies the minimum version requirements.
-See [this table](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-major-component-versions__table-cuda-toolkit-driver-versions) for more information.
-
-**note**
-Installing CUDA runtime libraries is enough if you follow our best practices,
-because no CUDA code will be compiled locally.
-However, if you hope to compile MMCV from source or develop other CUDA operators,
-you need to install the complete CUDA toolkit from NVIDIA's [website](https://developer.nvidia.com/cuda-downloads),
-and its version should match the CUDA version of PyTorch, i.e., the specified version of cudatoolkit in the `conda install` command.
-
-#### Install MMCV without MIM
-
-MMCV contains C++ and CUDA extensions, thus depending on PyTorch in a complex way.
-MIM solves such dependencies automatically and makes the installation easier. However, it is not a must.
-
-To install MMCV with pip instead of MIM, please follow [MMCV installation guides](https://mmcv.readthedocs.io/en/latest/get_started/installation.html).
-This requires manually specifying a find-url based on PyTorch version and its CUDA version.
-
-For example, the following command installs MMCV built for PyTorch 1.10.x and CUDA 11.3.
-
-```shell
-pip install 'mmcv>=2.0.0' -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10/index.html
-```
-
-#### Using MMagic with Docker
-
-We provide a [Dockerfile](https://github.com/open-mmlab/mmagic/blob/main/docker/Dockerfile) to build an image.
-Ensure that your [docker version](https://docs.docker.com/engine/install/) >=19.03.
-
-```shell
-# build an image with PyTorch 1.8, CUDA 11.1
-# If you prefer other versions, just modify the Dockerfile
-docker build -t mmagic docker/
-```
-
-Run it with
-
-```shell
-docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmagic/data mmagic
-```
-
-#### Troubleshooting
-
-If you have some issues during the installation, please first view the [FAQ](../faq.md) page.
-You may [open an issue](https://github.com/open-mmlab/mmagic/issues/new/choose) on GitHub if no solution is found.
-
-### Developing with multiple MMagic versions
-
-The train and test scripts already modify the `PYTHONPATH` to ensure the script uses the `MMagic` in the current directory.
-
-To use the default MMagic installed in the environment rather than the one you are working with, you can remove the following line in those scripts:
-
-```shell
-PYTHONPATH="$(dirname $0)/..":$PYTHONPATH
-```
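-
-To double-check which MMagic is actually being imported, you can print the package location (a quick sanity check, not an official step):
-
-```shell
-python -c 'import mmagic; print(mmagic.__file__)'
-```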
diff --git a/docs/en/get_started/overview.md b/docs/en/get_started/overview.md
deleted file mode 100644
index a00ab52981..0000000000
--- a/docs/en/get_started/overview.md
+++ /dev/null
@@ -1,79 +0,0 @@
-# Overview
-
-Welcome to MMagic! In this section, you will learn about
-
-- [Overview](#overview)
-  - [What is MMagic?](#what-is-mmagic)
-  - [Why should I use MMagic?](#why-should-i-use-mmagic)
-  - [Get started](#get-started)
-  - [User guides](#user-guides)
-    - [Advanced guides](#advanced-guides)
-    - [How to](#how-to)
-
-## What is MMagic?
-
-MMagic (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation) is an open-source AIGC toolbox for professional AI researchers and machine learning engineers to explore image and video processing, editing and generation.
-
-MMagic allows researchers and engineers to easily use pre-trained state-of-the-art models, and to train and develop new customized models.
-
-MMagic supports various fundamental generative models, including:
-
-- Unconditional Generative Adversarial Networks (GANs)
-- Conditional Generative Adversarial Networks (GANs)
-- Internal Learning
-- Diffusion Models
-- And many other generative models are coming soon!
-
-MMagic supports various applications, including:
-
-- Text-to-Image
-- Image-to-image translation
-- 3D-aware generation
-- Image super-resolution
-- Video super-resolution
-- Video frame interpolation
-- Image inpainting
-- Image matting
-- Image restoration
-- Image colorization
-- Image generation
-- And many other applications are coming soon!
-
-<div align=center>
-    <video width="100%" controls>
-        <source src="https://user-images.githubusercontent.com/49083766/233564593-7d3d48ed-e843-4432-b610-35e3d257765c.mp4" type="video/mp4">
-        <object data="https://user-images.githubusercontent.com/49083766/233564593-7d3d48ed-e843-4432-b610-35e3d257765c.mp4" width="100%">
-        </object>
-    </video>
-</div>
-</br>
-
-## Why should I use MMagic?
-
-- **State of the Art Models**
-
-  MMagic provides state-of-the-art generative models to process, edit and synthesize images and videos.
-
-- **Powerful and Popular Applications**
-
-  MMagic supports popular and contemporary image restoration, text-to-image, 3D-aware generation, inpainting, matting, super-resolution and generation applications. Specifically, MMagic supports fine-tuning for stable diffusion and many exciting diffusion-based applications such as ControlNet Animation with SAM. MMagic also supports GAN interpolation, GAN projection, GAN manipulation and many other popular GAN applications. It's time to begin your AIGC exploration journey!
-
-- **Efficient Framework**
-
-  By using MMEngine and MMCV of the OpenMMLab 2.0 framework, MMagic decomposes the editing framework into different modules, so one can easily construct a customized editor framework by combining different modules. You can define the training process just like playing with Legos, drawing on the rich components and strategies we provide. In MMagic, you have complete control over the training process through different levels of APIs. With the support of [MMSeparateDistributedDataParallel](https://github.com/open-mmlab/mmengine/blob/main/mmengine/model/wrappers/seperate_distributed.py), distributed training for dynamic architectures can be easily implemented.
-
-## Get started
-
-For installation instructions, please see [Installation](install.md).
-
-## User guides
-
-For beginners, we suggest learning the basic usage of MMagic from [user_guides](../user_guides/config.md).
-
-### Advanced guides
-
-If you are familiar with MMagic and want to learn its design, how to extend the repo, how to use multiple repos, and other advanced usage, please refer to [advanced_guides](../advanced_guides/evaluator.md).
-
-### How to
-
-For users who want to use MMagic to accomplish specific tasks, please refer to [How to](../howto/models.md).
diff --git a/docs/en/get_started/quick_run.md b/docs/en/get_started/quick_run.md
deleted file mode 100644
index 30e92398a0..0000000000
--- a/docs/en/get_started/quick_run.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# Quick run
-
-After installing MMagic successfully, you are now able to play with it! To generate an image from text, you only need a few lines of code:
-
-```python
-from mmagic.apis import MMagicInferencer
-sd_inferencer = MMagicInferencer(model_name='stable_diffusion')
-text_prompts = 'A panda is having dinner at KFC'
-result_out_dir = 'output/sd_res.png'
-sd_inferencer.infer(text=text_prompts, result_out_dir=result_out_dir)
-```
-
-Or you can just run the following command.
-
-```bash
-python demo/mmagic_inference_demo.py \
-    --model-name stable_diffusion \
-    --text "A panda is having dinner at KFC" \
-    --result-out-dir ./output/sd_res.png
-```
-
-You will see a new image `sd_res.png` in the `output/` folder, which contains the generated sample.
-
-What's more, if you want to make these photos clearer,
-you only need a few lines of code for image super-resolution with MMagic:
-
-```python
-from mmagic.apis import MMagicInferencer
-config = 'configs/esrgan/esrgan_x4c64b23g32_1xb16-400k_div2k.py'
-checkpoint = 'https://download.openmmlab.com/mmediting/restorers/esrgan/esrgan_x4c64b23g32_1x16_400k_div2k_20200508-f8ccaf3b.pth'
-img_path = 'tests/data/image/lq/baboon_x4.png'
-editor = MMagicInferencer('esrgan', model_config=config, model_ckpt=checkpoint)
-output = editor.infer(img=img_path, result_out_dir='output.png')
-```
-
-Now, you can check your fancy photos in `output.png`.
diff --git a/docs/en/howto/dataset.md b/docs/en/howto/dataset.md
deleted file mode 100644
index 3b36fe6c2c..0000000000
--- a/docs/en/howto/dataset.md
+++ /dev/null
@@ -1,586 +0,0 @@
-# How to prepare your own datasets
-
-In this document, we will introduce the design of each dataset in MMagic and how users can design their own datasets.
-
-- [How to prepare your own datasets](#how-to-prepare-your-own-datasets)
-  - [Supported Data Format](#supported-data-format)
-    - [BasicImageDataset](#basicimagedataset)
-    - [BasicFramesDataset](#basicframesdataset)
-    - [BasicConditionalDataset](#basicconditionaldataset)
-      - [1. Annotation file read by line (e.g., txt)](#1-annotation-file-read-by-line-eg-txt)
-      - [2. Dict-based annotation file (e.g., json):](#2-dict-based-annotation-file-eg-json)
-      - [3. Folder-based annotation (no annotation file need):](#3-folder-based-annotation-no-annotation-file-need)
-    - [ImageNet Dataset and CIFAR10 Dataset](#imagenet-dataset-and-cifar10-dataset)
-    - [AdobeComp1kDataset](#adobecomp1kdataset)
-    - [GrowScaleImgDataset](#growscaleimgdataset)
-    - [SinGANDataset](#singandataset)
-    - [PairedImageDataset](#pairedimagedataset)
-    - [UnpairedImageDataset](#unpairedimagedataset)
-  - [Design a new dataset](#design-a-new-dataset)
-    - [Repeat dataset](#repeat-dataset)
-
-## Supported Data Format
-
-In MMagic, all datasets inherit from `BaseDataset`.
-Each dataset loads the list of data info (e.g., data paths) via `load_data_list`.
-In `__getitem__`, `prepare_data` is called to get the preprocessed data.
-In `prepare_data`, the data loading pipeline consists of the following steps (see the sketch after this list):
-
-1. fetch the data info by passed index, implemented by `get_data_info`
-2. apply data transforms to the data, implemented by `pipeline`
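-
-The following minimal sketch (simplified and with illustrative names, not the actual MMEngine code) shows how these pieces fit together:
-
-```python
-class BaseDatasetSketch:
-    """A simplified sketch of the dataset loading logic."""
-
-    def __init__(self, pipeline):
-        self.pipeline = pipeline  # composed data transforms
-        self.data_list = self.load_data_list()  # list of data info dicts
-
-    def load_data_list(self):
-        # each dataset collects its own data info (e.g., data paths) here
-        raise NotImplementedError
-
-    def get_data_info(self, idx):
-        # step 1: fetch the data info by the passed index
-        return self.data_list[idx]
-
-    def prepare_data(self, idx):
-        # step 2: apply data transforms to the data
-        data_info = self.get_data_info(idx)
-        return self.pipeline(data_info)
-
-    def __getitem__(self, idx):
-        return self.prepare_data(idx)
-```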
-
-### BasicImageDataset
-
-**BasicImageDataset** `mmagic.datasets.BasicImageDataset`
-General image dataset designed for low-level vision tasks with images, such as image super-resolution, inpainting and unconditional image generation. The annotation file is optional.
-
-If an annotation file is used, the annotation format is as follows.
-
-```bash
-   Case 1 (CelebA-HQ):
-
-       000001.png
-       000002.png
-
-   Case 2 (DIV2K):
-
-       0001_s001.png (480,480,3)
-       0001_s002.png (480,480,3)
-       0001_s003.png (480,480,3)
-       0002_s001.png (480,480,3)
-       0002_s002.png (480,480,3)
-
-   Case 3 (Vimeo90k):
-
-       00001/0266 (256, 448, 3)
-       00001/0268 (256, 448, 3)
-```
-
-Here we give several examples showing how to use `BasicImageDataset`. Assume the following file structure:
-
-```md
-mmagic (root)
-├── mmagic
-├── tools
-├── configs
-├── data
-│   ├── DIV2K
-│   │   ├── DIV2K_train_HR
-│   │   │   ├── image.png
-│   │   ├── DIV2K_train_LR_bicubic
-│   │   │   ├── X2
-│   │   │   ├── X3
-│   │   │   ├── X4
-│   │   │   │   ├── image_x4.png
-│   │   ├── DIV2K_valid_HR
-│   │   ├── DIV2K_valid_LR_bicubic
-│   │   │   ├── X2
-│   │   │   ├── X3
-│   │   │   ├── X4
-│   ├── places
-│   │   ├── test_set
-│   │   ├── train_set
-|   |   ├── meta
-|   |   |    ├── Places365_train.txt
-|   |   |    ├── Places365_val.txt
-|   ├── celebahq
-│   │   ├── imgs_1024
-
-```
-
-Case 1: Loading DIV2K dataset for training a SISR model.
-
-```python
-   dataset = BasicImageDataset(
-       ann_file='',
-       metainfo=dict(
-           dataset_type='div2k',
-           task_name='sisr'),
-       data_root='data/DIV2K',
-       data_prefix=dict(
-           gt='DIV2K_train_HR', img='DIV2K_train_LR_bicubic/X4'),
-       filename_tmpl=dict(img='{}_x4', gt='{}'),
-       pipeline=[])
-```
-
-Case 2: Loading places dataset for training an inpainting model.
-
-```python
-   dataset = BasicImageDataset(
-       ann_file='meta/Places365_train.txt',
-       metainfo=dict(
-           dataset_type='places365',
-           task_name='inpainting'),
-       data_root='data/places',
-       data_prefix=dict(gt='train_set'),
-       pipeline=[])
-```
-
-Case 3: Loading CelebA-HQ dataset for training a PGGAN.
-
-```python
-dataset = BasicImageDataset(
-        pipeline=[],
-        data_root='./data/celebahq/imgs_1024')
-```
-
-### BasicFramesDataset
-
-**BasicFramesDataset** `mmagic.datasets.BasicFramesDataset`
-General frames dataset designed for low-level vision tasks with frames, such as video super-resolution and video frame interpolation. The annotation file is optional.
-
-If an annotation file is used, the annotation format is as follows.
-
-```bash
-Case 1 (Vid4):
-
-   calendar 41
-   city 34
-   foliage 49
-   walk 47
-
-Case 2 (REDS):
-
-   000/00000000.png (720, 1280, 3)
-   000/00000001.png (720, 1280, 3)
-
-Case 3 (Vimeo90k):
-
-   00001/0266 (256, 448, 3)
-   00001/0268 (256, 448, 3)
-```
-
-Assume the following file structure:
-
-```bash
-mmagic (root)
-├── mmagic
-├── tools
-├── configs
-├── data
-│   ├── Vid4
-│   │   ├── BIx4
-│   │   │   ├── city
-│   │   │   │   ├── img1.png
-│   │   ├── GT
-│   │   │   ├── city
-│   │   │   │   ├── img1.png
-│   │   ├── meta_info_Vid4_GT.txt
-│   ├── vimeo-triplet
-│   │   ├── sequences
-|   |   |   ├── 00001
-│   │   │   │   ├── 0389
-│   │   │   │   │   ├── img1.png
-│   │   │   │   │   ├── img2.png
-│   │   │   │   │   ├── img3.png
-│   │   ├── tri_trainlist.txt
-```
-
-Case 1: Loading Vid4 dataset for training a VSR model.
-
-```python
-dataset = BasicFramesDataset(
-    ann_file='meta_info_Vid4_GT.txt',
-    metainfo=dict(dataset_type='vid4', task_name='vsr'),
-    data_root='data/Vid4',
-    data_prefix=dict(img='BIx4', gt='GT'),
-    pipeline=[],
-    depth=2,
-    num_input_frames=5)
-```
-
-Case 2: Loading Vimeo90k dataset for training a VFI model.
-
-```python
-dataset = BasicFramesDataset(
-    ann_file='tri_trainlist.txt',
-    metainfo=dict(dataset_type='vimeo90k', task_name='vfi'),
-    data_root='data/vimeo-triplet',
-    data_prefix=dict(img='sequences', gt='sequences'),
-    pipeline=[],
-    depth=2,
-    load_frames_list=dict(
-        img=['img1.png', 'img3.png'], gt=['img2.png']))
-```
-
-### BasicConditionalDataset
-
-**BasicConditionalDataset** `mmagic.datasets.BasicConditionalDataset` is designed for conditional GANs (e.g., SAGAN, BigGAN). This dataset supports loading labels from an annotation file. `BasicConditionalDataset` supports three kinds of annotation, as follows:
-
-#### 1. Annotation file read by line (e.g., txt)
-
-Sample files structure:
-
-```
-    data_prefix/
-    ├── folder_1
-    │   ├── xxx.png
-    │   ├── xxy.png
-    │   └── ...
-    └── folder_2
-        ├── 123.png
-        ├── nsdf3.png
-        └── ...
-```
-
-Sample annotation file (the first column is the image path and the second column is the index of category):
-
-```
-    folder_1/xxx.png 0
-    folder_1/xxy.png 1
-    folder_2/123.png 5
-    folder_2/nsdf3.png 3
-    ...
-```
-
-Config example for ImageNet dataset:
-
-```python
-dataset=dict(
-    type='BasicConditionalDataset',
-    data_root='./data/imagenet/',
-    ann_file='meta/train.txt',
-    data_prefix='train',
-    pipeline=train_pipeline),
-```
-
-#### 2. Dict-based annotation file (e.g., json):
-
-Sample files structure:
-
-```
-    data_prefix/
-    ├── folder_1
-    │   ├── xxx.png
-    │   ├── xxy.png
-    │   └── ...
-    └── folder_2
-        ├── 123.png
-        ├── nsdf3.png
-        └── ...
-```
-
-Sample annotation file (the key is the image path and the value
-is the label):
-
-```
-    {
-        "folder_1/xxx.png": [1, 2, 3, 4],
-        "folder_1/xxy.png": [2, 4, 1, 0],
-        "folder_2/123.png": [0, 9, 8, 1],
-        "folder_2/nsdf3.png", [1, 0, 0, 2],
-        ...
-    }
-```
-
-Config example for EG3D (shapenet-car) dataset:
-
-```python
-dataset = dict(
-    type='BasicConditionalDataset',
-    data_root='./data/eg3d/shapenet-car',
-    ann_file='annotation.json',
-    pipeline=train_pipeline)
-```
-
-In this kind of annotation, labels can be of any type and are not restricted to an index.
-
-#### 3. Folder-based annotation (no annotation file need):
-
-Sample files structure:
-
-```
-    data_prefix/
-    ├── class_x
-    │   ├── xxx.png
-    │   ├── xxy.png
-    │   └── ...
-    │       └── xxz.png
-    └── class_y
-        ├── 123.png
-        ├── nsdf3.png
-        ├── ...
-        └── asd932_.png
-```
-
-If an annotation file is specified, the dataset is built in one of the first two ways; otherwise, the third way is used.
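-
-For example, a config for the folder-based way could look like the following sketch (paths are illustrative); since no `ann_file` is given, labels are inferred from the folder names:
-
-```python
-dataset = dict(
-    type='BasicConditionalDataset',
-    data_root='./data/custom_dataset',
-    data_prefix='train',
-    pipeline=train_pipeline)
-```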
-
-### ImageNet Dataset and CIFAR10 Dataset
-
-**ImageNet Dataset** `mmagic.datasets.ImageNet` and **CIFAR10 Dataset** `mmagic.datasets.CIFAR10` are specifically designed for the ImageNet and CIFAR10 datasets. Both are encapsulations of `BasicConditionalDataset`. You can use them to load data from the ImageNet and CIFAR10 datasets easily.
-
-Config example for ImageNet:
-
-```python
-pipeline = [
-    dict(type='LoadImageFromFile', key='img'),
-    dict(type='RandomCropLongEdge', keys=['img']),
-    dict(type='Resize', scale=(128, 128), keys=['img'], backend='pillow'),
-    dict(type='Flip', keys=['img'], flip_ratio=0.5, direction='horizontal'),
-    dict(type='PackInputs')
-]
-
-dataset=dict(
-    type='ImageNet',
-    data_root='./data/imagenet/',
-    ann_file='meta/train.txt',
-    data_prefix='train',
-    pipeline=pipeline),
-```
-
-Config example for CIFAR10:
-
-```python
-pipeline = [dict(type='PackInputs')]
-
-dataset = dict(
-    type='CIFAR10',
-    data_root='./data',
-    data_prefix='cifar10',
-    test_mode=False,
-    pipeline=pipeline)
-```
-
-### AdobeComp1kDataset
-
-**AdobeComp1kDataset** `mmagic.datasets.AdobeComp1kDataset`
-Adobe composition-1k dataset.
-
-The dataset loads (alpha, fg, bg) data and applies the specified transforms to
-the data. You can specify whether to composite the merged image online or to
-load a pre-composited merged image in the pipeline.
-
-Example for online comp-1k dataset:
-
-```md
-[
-   {
-       "alpha": 'alpha/000.png',
-       "fg": 'fg/000.png',
-       "bg": 'bg/000.png'
-   },
-   {
-       "alpha": 'alpha/001.png',
-       "fg": 'fg/001.png',
-       "bg": 'bg/001.png'
-   },
-]
-```
-
-Example for offline comp-1k dataset:
-
-```md
-[
-  {
-      "alpha": 'alpha/000.png',
-      "merged": 'merged/000.png',
-      "fg": 'fg/000.png',
-      "bg": 'bg/000.png'
-  },
-  {
-      "alpha": 'alpha/001.png',
-      "merged": 'merged/001.png',
-      "fg": 'fg/001.png',
-      "bg": 'bg/001.png'
-  },
-]
-```
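-
-A config using this dataset could look like the following sketch (the annotation file name and data root are assumptions for illustration):
-
-```python
-dataset = dict(
-    type='AdobeComp1kDataset',
-    ann_file='training_list.json',
-    data_root='./data/adobe_composition-1k',
-    pipeline=train_pipeline)
-```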
-
-### GrowScaleImgDataset
-
-`GrowScaleImgDataset` is designed for dynamic GAN models (e.g., PGGAN and StyleGANv1).
-In this dataset, we support switching the data root during training to load training images of different resolutions.
-This procedure is implemented by `GrowScaleImgDataset.update_annotations` and is called by `PGGANFetchDataHook.before_train_iter` in the training process.
-
-```python
-def update_annotations(self, curr_scale):
-    # determine if the data root needs to be updated
-    if curr_scale == self._actual_curr_scale:
-        return False
-
-    # fetch new data root by resolution (scale)
-    for scale in self._img_scales:
-        if curr_scale <= scale:
-            self._curr_scale = scale
-            break
-        if scale == self._img_scales[-1]:
-            raise RuntimeError(
-                f'Cannot find a suitable scale for {curr_scale}')
-    self._actual_curr_scale = curr_scale
-    self.data_root = self.data_roots[str(self._curr_scale)]
-
-    # reload the data list with new data root
-    self.load_data_list()
-
-    # print basic dataset information to check the validity
-    print_log('Update Dataset: ' + repr(self), 'current')
-    return True
-```
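-
-In a config, this means providing a mapping from resolution to data root. The following is a sketch with assumed paths; the `data_roots` key mirrors the attribute used in the code above:
-
-```python
-dataset = dict(
-    type='GrowScaleImgDataset',
-    data_roots={
-        '64': './data/celebahq/imgs_64',
-        '256': './data/celebahq/imgs_256',
-        '1024': './data/celebahq/imgs_1024',
-    },
-    pipeline=pipeline)
-```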
-
-### SinGANDataset
-
-`SinGANDataset` is designed for SinGAN's training.
-In SinGAN's training, we do not iterate over the images in the dataset but instead return a consistent, preprocessed image dict.
-
-Therefore, we bypass the default data loading logic of `BaseDataset` because we do not need to load the corresponding image data based on the given index.
-
-```python
-def load_data_list(self, min_size, max_size, scale_factor_init):
-    # load single image
-    real = mmcv.imread(self.data_root)
-    self.reals, self.scale_factor, self.stop_scale = create_real_pyramid(
-        real, min_size, max_size, scale_factor_init)
-
-    self.data_dict = {}
-
-    # generate multi scale image
-    for i, real in enumerate(self.reals):
-        self.data_dict[f'real_scale{i}'] = real
-
-    self.data_dict['input_sample'] = np.zeros_like(
-        self.data_dict['real_scale0']).astype(np.float32)
-
-def __getitem__(self, index):
-    # directly return the transformed data dict
-    return self.pipeline(self.data_dict)
-```
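-
-Since the dataset wraps a single image, a config sketch could look like this (the image path is an assumption; the other arguments mirror the `load_data_list` signature above):
-
-```python
-dataset = dict(
-    type='SinGANDataset',
-    data_root='./data/singan/balloons.png',  # a single training image
-    min_size=25,
-    max_size=250,
-    scale_factor_init=0.75,
-    pipeline=[])
-```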
-
-### PairedImageDataset
-
-`PairedImageDataset` is designed for translation models that need paired training data (e.g., Pix2Pix).
-The directory structure is shown below. Each image file is the concatenation of an image pair.
-
-```
-./data/dataset_name/
-├── test
-│   └── XXX.jpg
-└── train
-    └── XXX.jpg
-```
-
-In `PairedImageDataset`, we scan the file list in `load_data_list` and save each path in the `pair_path` field to fit the `LoadPairedImageFromFile` transformation.
-
-```python
-def load_data_list(self):
-    data_infos = []
-    pair_paths = sorted(self.scan_folder(self.data_root))
-    for pair_path in pair_paths:
-        # save path in the specific field
-        data_infos.append(dict(pair_path=pair_path))
-
-    return data_infos
-```
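-
-A corresponding config sketch (the dataset name and pipeline are placeholders, and the exact constructor arguments may differ):
-
-```python
-dataset = dict(
-    type='PairedImageDataset',
-    data_root='./data/dataset_name',
-    pipeline=train_pipeline,
-    test_mode=False)
-```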
-
-### UnpairedImageDataset
-
-`UnpairedImageDataset` is designed for translation models that do not need paired data (e.g., CycleGAN). The directory structure is shown below.
-
-```
-./data/dataset_name/
-├── testA
-│   └── XXX.jpg
-├── testB
-│   └── XXX.jpg
-├── trainA
-│   └── XXX.jpg
-└── trainB
-    └── XXX.jpg
-
-```
-
-In this dataset, we override the `__getitem__` function to load a random image pair during training.
-
-```python
-def __getitem__(self, idx):
-    if not self.test_mode:
-        return self.prepare_train_data(idx)
-
-    return self.prepare_test_data(idx)
-
-def prepare_train_data(self, idx):
-    img_a_path = self.data_infos_a[idx % self.len_a]['path']
-    idx_b = np.random.randint(0, self.len_b)
-    img_b_path = self.data_infos_b[idx_b]['path']
-    results = dict()
-    results[f'img_{self.domain_a}_path'] = img_a_path
-    results[f'img_{self.domain_b}_path'] = img_b_path
-    return self.pipeline(results)
-
-def prepare_test_data(self, idx):
-    img_a_path = self.data_infos_a[idx % self.len_a]['path']
-    img_b_path = self.data_infos_b[idx % self.len_b]['path']
-    results = dict()
-    results[f'img_{self.domain_a}_path'] = img_a_path
-    results[f'img_{self.domain_b}_path'] = img_b_path
-    return self.pipeline(results)
-```
-
-## Design a new dataset
-
-If you want to create a dataset for a new low-level CV task (e.g., denoising, deraining, defogging, or reflection removal), or if the existing dataset formats don't meet your needs, you can reorganize your data into an existing format.
-
-Alternatively, you can create a new dataset in `mmagic/datasets` to load the data.
-
-Inheriting from one of the dataset base classes, such as `BasicImageDataset` and `BasicFramesDataset`, will make it easier to create a new dataset.
-
-You can also create a new dataset inherited from [BaseDataset](https://github.com/open-mmlab/mmengine/blob/main/mmengine/dataset/base_dataset.py), which is the base class of datasets in [MMEngine](https://github.com/open-mmlab/mmengine).
-
-Here is an example of creating a dataset for video frame interpolation:
-
-```python
-from .basic_frames_dataset import BasicFramesDataset
-from mmagic.registry import DATASETS
-
-
-@DATASETS.register_module()
-class NewVFIDataset(BasicFramesDataset):
-    """Introduce the dataset
-
-    Examples of file structure.
-
-    Args:
-        pipeline (list[dict | callable]): A sequence of data transformations.
-        folder (str | :obj:`Path`): Path to the folder.
-        ann_file (str | :obj:`Path`): Path to the annotation file.
-        test_mode (bool): Store `True` when building test dataset.
-            Default: `False`.
-    """
-
-    def __init__(self, ann_file, metainfo, data_root, data_prefix,
-                    pipeline, test_mode=False):
-        super().__init__(ann_file, metainfo, data_root, data_prefix,
-                            pipeline, test_mode)
-        self.data_infos = self.load_annotations()
-
-    def load_annotations(self):
-        """Load annoations for the dataset.
-
-        Returns:
-            list[dict]: A list of dicts for paired paths and other information.
-        """
-        data_infos = []
-        ...
-        return data_infos
-
-```
-
-You are welcome to [submit new dataset classes to MMagic](https://github.com/open-mmlab/mmagic/compare).
-
-### Repeat dataset
-
-We use [RepeatDataset](https://github.com/open-mmlab/mmengine/blob/main/mmengine/dataset/dataset_wrapper.py) as wrapper to repeat the dataset.
-For example, suppose the original dataset is Dataset_A. To repeat it, the config looks like the following:
-
-```python
-dataset_A_train = dict(
-        type='RepeatDataset',
-        times=N,
-        dataset=dict(  # This is the original config of Dataset_A
-            type='Dataset_A',
-            ...
-            pipeline=train_pipeline
-        )
-    )
-```
-
-You may refer to [tutorial in MMEngine](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/basedataset.md).
diff --git a/docs/en/howto/losses.md b/docs/en/howto/losses.md
deleted file mode 100644
index a4c97d5704..0000000000
--- a/docs/en/howto/losses.md
+++ /dev/null
@@ -1,528 +0,0 @@
-# How to design your own loss functions
-
-`losses` are registered as `LOSSES` in `MMagic`.
-Customizing losses is similar to customizing any other model.
-This section is mainly for clarifying the design of loss modules in MMagic.
-Importantly, when writing your own loss modules, you should follow the same design,
-so that the new loss module can be adopted in our framework without extra effort.
-
-This guide includes:
-
-- [How to design your own loss functions](#how-to-design-your-own-loss-functions)
-  - [Introduction to supported losses](#introduction-to-supported-losses)
-  - [Design a new loss function](#design-a-new-loss-function)
-    - [An example of MSELoss](#an-example-of-mseloss)
-    - [An example of DiscShiftLoss](#an-example-of-discshiftloss)
-    - [An example of GANWithCustomizedLoss](#an-example-of-ganwithcustomizedloss)
-  - [Available losses](#available-losses)
-    - [regular losses](#regular-losses)
    - [loss components](#loss-components)
-
-## Introduction to supported losses
-
-For convenient usage, you can directly use the default loss calculation processes we provide for concrete algorithms such as LSGAN, BigGAN, StyleGANv2, etc.
-Taking `stylegan2` as an example, we use the R1 gradient penalty and generator path length regularization as configurable losses, and users can adjust
-related arguments such as `r1_loss_weight` and `g_reg_weight`.
-
-```python
-# stylegan2_base.py
-loss_config = dict(
-    r1_loss_weight=10. / 2. * d_reg_interval,
-    r1_interval=d_reg_interval,
-    norm_mode='HWC',
-    g_reg_interval=g_reg_interval,
-    g_reg_weight=2. * g_reg_interval,
-    pl_batch_shrink=2)
-
-model = dict(
-    type='StyleGAN2',
-    xxx,
-    loss_config=loss_config)
-```
-
-## Design a new loss function
-
-### An example of MSELoss
-
-In general, to implement a loss module, we will write a function implementation and then wrap it with a class implementation. Take the MSELoss as an example:
-
-```python
-@masked_loss
-def mse_loss(pred, target):
-    return F.mse_loss(pred, target, reduction='none')
-
-@LOSSES.register_module()
-class MSELoss(nn.Module):
-
-    def __init__(self, loss_weight=1.0, reduction='mean', sample_wise=False):
-        # codes can be found in ``mmagic/models/losses/pixelwise_loss.py``
-
-    def forward(self, pred, target, weight=None, **kwargs):
-        # codes can be found in ``mmagic/models/losses/pixelwise_loss.py``
-```
-
-Given the definition of the loss, we can now use the loss by simply defining it in the configuration file:
-
-```python
-pixel_loss=dict(type='MSELoss', loss_weight=1.0, reduction='mean')
-```
-
-Note that `pixel_loss` above must be defined in the model. Please refer to `customize_models` for more details. Similar to model customization, in order to use your customized loss, you need to import the loss in `mmagic/models/losses/__init__.py` after writing it.
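-
-For example, after implementing a new loss class `MyLoss` (a hypothetical name) in `mmagic/models/losses/my_loss.py`, the import would look like:
-
-```python
-# in mmagic/models/losses/__init__.py (illustrative)
-from .my_loss import MyLoss
-```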
-
-### An example of DiscShiftLoss
-
-In general, to implement a loss module, we will write a function implementation and then wrap it with a class implementation.
-However, in `MMagic`, we provide another unified interface `data_info` for users to define the mapping between the input argument and data items.
-
-```python
-@weighted_loss
-def disc_shift_loss(pred):
-    return pred**2
-
-@MODULES.register_module()
-class DiscShiftLoss(nn.Module):
-
-    def __init__(self, loss_weight=1.0, data_info=None):
-        super(DiscShiftLoss, self).__init__()
-        # codes can be found in ``mmagic/models/losses/disc_auxiliary_loss.py``
-
-    def forward(self, *args, **kwargs):
-        # codes can be found in ``mmagic/models/losses/disc_auxiliary_loss.py``
-```
-
-The goal of this design is to allow loss modules to be used automatically in the generative models (`MODELS`), without complex code to define the mapping between data and keyword arguments. Thus, different from other frameworks in `OpenMMLab`, our loss modules contain a special keyword, `data_info`, which is a dictionary defining the mapping between the input arguments and data from the generative models. Taking `DiscShiftLoss` as an example, when writing the config file, users may use this loss as follows:
-
-```python
-dict(type='DiscShiftLoss',
-    loss_weight=0.001 * 0.5,
-    data_info=dict(pred='disc_pred_real'))
-```
-
-The information in `data_info` tells the module to use the `disc_pred_real` data as the input tensor for the `pred` argument. Once `data_info` is not `None`, our loss module will automatically build up the computational graph.
-
-```python
-@MODULES.register_module()
-class DiscShiftLoss(nn.Module):
-
-    def __init__(self, loss_weight=1.0, data_info=None):
-        super(DiscShiftLoss, self).__init__()
-        self.loss_weight = loss_weight
-        self.data_info = data_info
-
-    def forward(self, *args, **kwargs):
-        # use data_info to build computational path
-        if self.data_info is not None:
-            # parse the args and kwargs
-            if len(args) == 1:
-                assert isinstance(args[0], dict), (
-                    'You should offer a dictionary containing network outputs '
-                    'for building up computational graph of this loss module.')
-                outputs_dict = args[0]
-            elif 'outputs_dict' in kwargs:
-                assert len(args) == 0, (
-                    'If the outputs dict is given in keyworded arguments, no'
-                    ' further non-keyworded arguments should be offered.')
-                outputs_dict = kwargs.pop('outputs_dict')
-            else:
-                raise NotImplementedError(
                    'Cannot parse the arguments passed to this loss module.'
-                    ' Please check the usage of this module')
-            # link the outputs with loss input args according to self.data_info
-            loss_input_dict = {
-                k: outputs_dict[v]
-                for k, v in self.data_info.items()
-            }
-            kwargs.update(loss_input_dict)
-            kwargs.update(dict(weight=self.loss_weight))
-            return disc_shift_loss(**kwargs)
-        else:
-            # if you have not defined how to build the computational graph,
-            # this module will just return the loss directly, as usual.
-            return disc_shift_loss(*args, weight=self.loss_weight, **kwargs)
-
-    @staticmethod
-    def loss_name():
-        return 'loss_disc_shift'
-
-```
-
-As shown in this code, once users set `data_info`, the loss module will receive a dictionary containing all of the necessary data and modules, provided by the `MODELS` in the training procedure. If this dictionary is given as a positional argument, it must be the first argument. If you are using a keyword argument, please name it `outputs_dict`.
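-
-In other words, both of the following call styles are expected to work (a minimal usage sketch; the tensor is random here, but in practice it comes from the model's training step):
-
-```python
-import torch
-
-loss_module = DiscShiftLoss(
-    loss_weight=0.001, data_info=dict(pred='disc_pred_real'))
-outputs_dict = dict(disc_pred_real=torch.randn(4, 1))
-
-loss = loss_module(outputs_dict)               # dict as the first positional argument
-loss = loss_module(outputs_dict=outputs_dict)  # or as the `outputs_dict` keyword
-```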
-
-### An example of GANWithCustomizedLoss
-
-To build the computational graph, the generative models have to provide a dictionary containing all kinds of data. Taking a close look at any generative model, you will find that we collect all kinds of features and modules into a dictionary. We provide a customized `GANWithCustomizedLoss` here to show the process.
-
-```python
-class GANWithCustomizedLoss(BaseModel):
-
-    def __init__(self, gan_loss, disc_auxiliary_loss, gen_auxiliary_loss,
-                 *args, **kwargs):
-        # ...
-        if gan_loss is not None:
-            self.gan_loss = MODULES.build(gan_loss)
-        else:
-            self.gan_loss = None
-
-        if disc_auxiliary_loss:
-            self.disc_auxiliary_losses = MODULES.build(disc_auxiliary_loss)
-            if not isinstance(self.disc_auxiliary_losses, nn.ModuleList):
-                self.disc_auxiliary_losses = nn.ModuleList(
-                    [self.disc_auxiliary_losses])
-        else:
-            self.disc_auxiliary_loss = None
-
-        if gen_auxiliary_loss:
-            self.gen_auxiliary_losses = MODULES.build(gen_auxiliary_loss)
-            if not isinstance(self.gen_auxiliary_losses, nn.ModuleList):
-                self.gen_auxiliary_losses = nn.ModuleList(
-                    [self.gen_auxiliary_losses])
-        else:
-            self.gen_auxiliary_losses = None
-
-    def train_step(self, data: dict,
-                   optim_wrapper: OptimWrapperDict) -> Dict[str, Tensor]:
-        # ...
-
-        # get data dict to compute losses for disc
-        data_dict_ = dict(
-            iteration=curr_iter,
-            gen=self.generator,
-            disc=self.discriminator,
-            disc_pred_fake=disc_pred_fake,
-            disc_pred_real=disc_pred_real,
-            fake_imgs=fake_imgs,
-            real_imgs=real_imgs)
-
-        loss_disc, log_vars_disc = self._get_disc_loss(data_dict_)
-
-        # ...
-
-    def _get_disc_loss(self, outputs_dict):
-        # Construct the losses dict. If you want an item to be included in
-        # the computational graph, its name must contain 'loss'; items
-        # without 'loss' in their name will just be used to print
-        # information.
-        losses_dict = {}
-        # gan loss
-        losses_dict['loss_disc_fake'] = self.gan_loss(
-            outputs_dict['disc_pred_fake'], target_is_real=False, is_disc=True)
-        losses_dict['loss_disc_real'] = self.gan_loss(
-            outputs_dict['disc_pred_real'], target_is_real=True, is_disc=True)
-
-        # disc auxiliary loss
-        if self.with_disc_auxiliary_loss:
-            for loss_module in self.disc_auxiliary_losses:
-                loss_ = loss_module(outputs_dict)
-                if loss_ is None:
-                    continue
-
-                # the `loss_name()` function returns a name like 'loss_xxx'
-                if loss_module.loss_name() in losses_dict:
-                    losses_dict[loss_module.loss_name(
-                    )] = losses_dict[loss_module.loss_name()] + loss_
-                else:
-                    losses_dict[loss_module.loss_name()] = loss_
-        loss, log_var = self.parse_losses(losses_dict)
-
-        return loss, log_var
-
-```
-
-Here, the `_get_disc_loss` will help to combine all kinds of losses automatically.
-
-Therefore, as long as users design the loss module following the same rules, any kind of loss can be inserted into the training of generative models
-without further modifications to the model code. All you need to do is define the `data_info` in the config files.
-
-## Available losses
-
-We list available losses with examples in configs as follows.
-
-### regular losses
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th>Method</th>
-    <th>class</th>
-    <th>Example</th>
-  </tr>
-</thead>
-<tbody>
-  <tr>
-    <td>vanilla gan loss</td>
-    <td>mmagic.models.GANLoss</td>
-<td>
-
-```python
-# dic gan
-loss_gan=dict(
-    type='GANLoss',
-    gan_type='vanilla',
-    loss_weight=0.001,
-)
-
-```
-
-</td>
-
-</tr>
-  <tr>
-    <td>lsgan loss</td>
-    <td>mmagic.models.GANLoss</td>
-<td>
-</td>
-
-</tr>
-  <tr>
-    <td>wgan loss</td>
-    <td>mmagic.models.GANLoss</td>
-    <td>
-
-```python
-# deepfillv1
-loss_gan=dict(
-    type='GANLoss',
-    gan_type='wgan',
-    loss_weight=0.0001,
-)
-```
-
-</td>
-
-</tr>
-  <tr>
-    <td>hinge loss</td>
-    <td>mmagic.models.GANLoss</td>
-    <td>
-
-```python
-# deepfillv2
-loss_gan=dict(
-    type='GANLoss',
-    gan_type='hinge',
-    loss_weight=0.1,
-)
-```
-
-</td>
-
-</tr>
-  <tr>
-    <td>smgan loss</td>
-    <td>mmagic.models.GANLoss</td>
-<td>
-
-```python
-# aot-gan
-loss_gan=dict(
-    type='GANLoss',
-    gan_type='smgan',
-    loss_weight=0.01,
-)
-```
-
-</td>
-
-</tr>
-  <tr>
-    <td>gradient penalty</td>
-    <td>mmagic.models.GradientPenaltyLoss</td>
-    <td>
-
-```python
-# deepfillv1
-loss_gp=dict(type='GradientPenaltyLoss', loss_weight=10.)
-```
-
-</td>
-
-</tr>
-  <tr>
-    <td>discriminator shift loss</td>
-    <td>mmagic.models.DiscShiftLoss</td>
-    <td>
-
-```python
-# deepfillv1
-loss_disc_shift=dict(type='DiscShiftLoss', loss_weight=0.001)
-
-```
-
-</td>
-
-</tr>
-  <tr>
-    <td>clip loss</td>
-    <td>mmagic.models.CLIPLoss</td>
-    <td></td>
-
-</tr>
-  <tr>
-    <td>L1 composition loss</td>
-    <td>mmagic.models.L1CompositionLoss</td>
-    <td></td>
-
-</tr>
-  <tr>
-    <td>MSE composition loss</td>
-    <td>mmagic.models.MSECompositionLoss</td>
-    <td></td>
-
-</tr>
-  <tr>
-    <td>charbonnier composition loss</td>
-    <td>mmagic.models.CharbonnierCompLoss</td>
-    <td>
-
-```python
-# dim
-loss_comp=dict(type='CharbonnierCompLoss', loss_weight=0.5)
-```
-
-</td>
-
-</tr>
-  <tr>
-    <td>face id Loss</td>
-    <td>mmagic.models.FaceIdLoss</td>
-    <td></td>
-
-</tr>
-  <tr>
-    <td>light cnn feature loss</td>
-    <td>mmagic.models.LightCNNFeatureLoss</td>
-    <td>
-
-```python
-# dic gan
-feature_loss=dict(
-    type='LightCNNFeatureLoss',
-    pretrained=pretrained_light_cnn,
-    loss_weight=0.1,
-    criterion='l1')
-```
-
-</td>
-
-</tr>
-  <tr>
-    <td>gradient loss</td>
-    <td>mmagic.models.GradientLoss</td>
-    <td></td>
-
-</tr>
-  <tr>
-    <td>l1 Loss</td>
-    <td>mmagic.models.L1Loss</td>
-    <td>
-
-```python
-# dic gan
-pixel_loss=dict(type='L1Loss', loss_weight=1.0, reduction='mean')
-```
-
-</td>
-
-</tr>
-  <tr>
-    <td>mse loss</td>
-    <td>mmagic.models.MSELoss</td>
-    <td>
-
-```python
-# dic gan
-align_loss=dict(type='MSELoss', loss_weight=0.1, reduction='mean')
-```
-
-</td>
-
-</tr>
-  <tr>
-    <td>charbonnier loss</td>
-    <td>mmagic.models.CharbonnierLoss</td>
-    <td>
-
-```python
-# dim
-loss_alpha=dict(type='CharbonnierLoss', loss_weight=0.5)
-```
-
-</td>
-
-</tr>
-  <tr>
-    <td>masked total variation loss</td>
-    <td>mmagic.models.MaskedTVLoss</td>
-    <td>
-
-```python
-# partial conv
-loss_tv=dict(
-    type='MaskedTVLoss',
-    loss_weight=0.1
-)
-
-```
-
-</td>
-
-</tr>
-  <tr>
-    <td>perceptual loss</td>
-    <td>mmagic.models.PerceptualLoss</td>
-    <td>
-
-```python
-# real_basicvsr
-perceptual_loss=dict(
-    type='PerceptualLoss',
-    layer_weights={
-        '2': 0.1,
-        '7': 0.1,
-        '16': 1.0,
-        '25': 1.0,
-        '34': 1.0,
-    },
-    vgg_type='vgg19',
-    perceptual_weight=1.0,
-    style_weight=0,
-    norm_img=False)
-
-```
-
-</td>
-
-</tr>
-  <tr>
-    <td>transferal perceptual loss</td>
-    <td>mmagic.models.TransferalPerceptualLoss</td>
-    <td>
-
-```python
-# ttsr
-transferal_perceptual_loss=dict(
-    type='TransferalPerceptualLoss',
-    loss_weight=1e-2,
-    use_attention=False,
-    criterion='mse')
-
-```
-
-</td>
-  </tr>
-</tbody>
-</table>
-
-### loss components
-
-For `GANWithCustomizedLoss`, we provide several components to build customized losses.
-
-| Method                               | class                                       |
-| ------------------------------------ | ------------------------------------------- |
-| clip loss component                  | mmagic.models.CLIPLossComps                 |
-| discriminator shift loss component   | mmagic.models.DiscShiftLossComps            |
-| gradient penalty loss component      | mmagic.models.GradientPenaltyLossComps      |
-| r1 gradient penalty component        | mmagic.models.R1GradientPenaltyComps        |
-| face id loss component               | mmagic.models.FaceIdLossComps               |
-| gan loss component                   | mmagic.models.GANLossComps                  |
-| generator path regularizer component | mmagic.models.GeneratorPathRegularizerComps |
diff --git a/docs/en/howto/models.md b/docs/en/howto/models.md
deleted file mode 100644
index b135b96990..0000000000
--- a/docs/en/howto/models.md
+++ /dev/null
@@ -1,748 +0,0 @@
-# How to design your own models
-
-MMagic is built upon MMEngine and MMCV, which enables users to design new models quickly, train and evaluate them easily.
-In this section, you will learn how to design your own models.
-
-The structure of this guide is as follows:
-
-- [How to design your own models](#how-to-design-your-own-models)
-  - [Overview of models in MMagic](#overview-of-models-in-mmagic)
-  - [An example of SRCNN](#an-example-of-srcnn)
-    - [Step 1: Define the network of SRCNN](#step-1-define-the-network-of-srcnn)
-    - [Step 2: Define the model of SRCNN](#step-2-define-the-model-of-srcnn)
-    - [Step 3: Start training SRCNN](#step-3-start-training-srcnn)
-  - [An example of DCGAN](#an-example-of-dcgan)
-    - [Step 1: Define the network of DCGAN](#step-1-define-the-network-of-dcgan)
-    - [Step 2: Design the model of DCGAN](#step-2-design-the-model-of-dcgan)
-    - [Step 3: Start training DCGAN](#step-3-start-training-dcgan)
-  - [References](#references)
-
-## Overview of models in MMagic
-
-In MMagic, an algorithm can be split into two components: **Model** and **Module**.
-
-- **Model** is the topmost wrapper and always inherits from `BaseModel` provided in MMEngine. A **Model** is responsible for the network forward pass, loss calculation, backward pass, parameter updating, etc. In MMagic, a **Model** should be registered as `MODELS`.
-- **Module** includes the neural network **architectures** used for training or inference, pre-defined **loss classes**, and **data preprocessors** that preprocess the input data batch. A **Module** always appears as an element of a **Model**. In MMagic, a **Module** should be registered as `MODULES`.
-
-Taking the DCGAN model as an example, the [generator](https://github.com/open-mmlab/mmagic/blob/main/mmagic/models/editors/dcgan/dcgan_generator.py) and [discriminator](https://github.com/open-mmlab/mmagic/blob/main/mmagic/models/editors/dcgan/dcgan_discriminator.py) are **Modules**, which generate images and discriminate between real and fake images, respectively. [`DCGAN`](https://github.com/open-mmlab/mmagic/blob/main/mmagic/models/editors/dcgan/dcgan.py) is the **Model**, which takes data from the dataloader and trains the generator and discriminator alternately.
-
-You can find the implementations of **Model** and **Module** via the following links.
-
-- **Model**:
-  - [Editors](https://github.com/open-mmlab/mmagic/tree/main/mmagic/models/editors)
-- **Module**:
-  - [Layers](https://github.com/open-mmlab/mmagic/tree/main/mmagic/models/layers)
-  - [Losses](https://github.com/open-mmlab/mmagic/tree/main/mmagic/models/losses)
-  - [Data Preprocessor](https://github.com/open-mmlab/mmagic/tree/main/mmagic/models/data_preprocessors)
-
-## An example of SRCNN
-
-Here, we take the implementation of the classical image super-resolution model, SRCNN \[1\], as an example.
-
-### Step 1: Define the network of SRCNN
-
-SRCNN is the first deep learning method for single image super-resolution \[1\].
-To implement the network architecture of SRCNN,
-we need to create a new file `mmagic/models/editors/srgan/sr_resnet.py` and implement `class MSRResNet`.
-
-In this step, we implement `class MSRResNet` by inheriting from `mmengine.models.BaseModule` and define the network architecture in `__init__` function.
-In particular, we need to use `@MODELS.register_module()` to add the implementation of `class MSRResNet` into the registration of MMagic.
-
-```python
-import torch.nn as nn
-from mmengine.model import BaseModule
-from mmagic.registry import MODELS
-
-from mmagic.models.utils import (PixelShufflePack, ResidualBlockNoBN,
-                                 default_init_weights, make_layer)
-
-
-@MODELS.register_module()
-class MSRResNet(BaseModule):
-    """Modified SRResNet.
-
-    A compacted version modified from SRResNet in "Photo-Realistic Single
-    Image Super-Resolution Using a Generative Adversarial Network".
-
-    It uses residual blocks without BN, similar to EDSR.
-    Currently, it supports x2, x3 and x4 upsampling scale factor.
-
-    Args:
-        in_channels (int): Channel number of inputs.
-        out_channels (int): Channel number of outputs.
-        mid_channels (int): Channel number of intermediate features.
-            Default: 64.
-        num_blocks (int): Block number in the trunk network. Default: 16.
-        upscale_factor (int): Upsampling factor. Support x2, x3 and x4.
-            Default: 4.
-    """
-    _supported_upscale_factors = [2, 3, 4]
-
-    def __init__(self,
-                 in_channels,
-                 out_channels,
-                 mid_channels=64,
-                 num_blocks=16,
-                 upscale_factor=4):
-
-        super().__init__()
-        self.in_channels = in_channels
-        self.out_channels = out_channels
-        self.mid_channels = mid_channels
-        self.num_blocks = num_blocks
-        self.upscale_factor = upscale_factor
-
-        self.conv_first = nn.Conv2d(
-            in_channels, mid_channels, 3, 1, 1, bias=True)
-        self.trunk_net = make_layer(
-            ResidualBlockNoBN, num_blocks, mid_channels=mid_channels)
-
-        # upsampling
-        if self.upscale_factor in [2, 3]:
-            self.upsample1 = PixelShufflePack(
-                mid_channels,
-                mid_channels,
-                self.upscale_factor,
-                upsample_kernel=3)
-        elif self.upscale_factor == 4:
-            self.upsample1 = PixelShufflePack(
-                mid_channels, mid_channels, 2, upsample_kernel=3)
-            self.upsample2 = PixelShufflePack(
-                mid_channels, mid_channels, 2, upsample_kernel=3)
-        else:
-            raise ValueError(
-                f'Unsupported scale factor {self.upscale_factor}. '
-                f'Currently supported ones are '
-                f'{self._supported_upscale_factors}.')
-
-        self.conv_hr = nn.Conv2d(
-            mid_channels, mid_channels, 3, 1, 1, bias=True)
-        self.conv_last = nn.Conv2d(
-            mid_channels, out_channels, 3, 1, 1, bias=True)
-
-        self.img_upsampler = nn.Upsample(
-            scale_factor=self.upscale_factor,
-            mode='bilinear',
-            align_corners=False)
-
-        # activation function
-        self.lrelu = nn.LeakyReLU(negative_slope=0.1, inplace=True)
-
-        self.init_weights()
-
-    def init_weights(self):
-        """Init weights for models.
-
-        Args:
-            pretrained (str, optional): Path for pretrained weights. If given
-                None, pretrained weights will not be loaded. Defaults to None.
-            strict (bool, optional): Whether to strictly load the pretrained
-                Defaults to True.
-        """
-
-        for m in [self.conv_first, self.conv_hr, self.conv_last]:
-            default_init_weights(m, 0.1)
-
-```
-
-Then, we implement the `forward` function of `class MSRResNet`, which takes an input tensor and returns the results from `MSRResNet`.
-
-```python
-    def forward(self, x):
-        """Forward function.
-
-        Args:
-            x (Tensor): Input tensor with shape (n, c, h, w).
-
-        Returns:
-            Tensor: Forward results.
-        """
-
-        feat = self.lrelu(self.conv_first(x))
-        out = self.trunk_net(feat)
-
-        if self.upscale_factor in [2, 3]:
-            out = self.upsample1(out)
-        elif self.upscale_factor == 4:
-            out = self.upsample1(out)
-            out = self.upsample2(out)
-
-        out = self.conv_last(self.lrelu(self.conv_hr(out)))
-        upsampled_img = self.img_upsampler(x)
-        out += upsampled_img
-        return out
-```
-
-After the implementation of `class MSRResNet`, we need to update the model list in `mmagic/models/editors/__init__.py`, so that we can import and use `class MSRResNet` by `mmagic.models.editors`.
-
-```python
-from .srgan.sr_resnet import MSRResNet
-```
-
-### Step 2: Define the model of SRCNN
-
-After the implementation of the network architecture,
-we need to define our model `class BaseEditModel` and implement the forward loop of `class BaseEditModel`.
-
-To implement `class BaseEditModel`,
-we create a new file `mmagic/models/base_models/base_edit_model.py`.
-Specifically, `class BaseEditModel` inherits from `mmengine.model.BaseModel`.
-In the `__init__` function, we define the loss functions, training and testing configurations, networks of `class BaseEditModel`.
-
-```python
-from typing import List, Optional
-
-import torch
-from mmengine.model import BaseModel
-
-from mmagic.registry import MODELS
-from mmagic.structures import DataSample
-
-
-@MODELS.register_module()
-class BaseEditModel(BaseModel):
-    """Base model for image and video editing.
-
-    It must contain a generator that takes frames as inputs and outputs an
-    interpolated frame. It also has a pixel-wise loss for training.
-
-    Args:
-        generator (dict): Config for the generator structure.
-        pixel_loss (dict): Config for pixel-wise loss.
-        train_cfg (dict): Config for training. Default: None.
-        test_cfg (dict): Config for testing. Default: None.
-        init_cfg (dict, optional): The weight initialized config for
-            :class:`BaseModule`.
-        data_preprocessor (dict, optional): The pre-process config of
-            :class:`BaseDataPreprocessor`.
-
-    Attributes:
-        init_cfg (dict, optional): Initialization config dict.
-        data_preprocessor (:obj:`BaseDataPreprocessor`): Used for
-            pre-processing data sampled by dataloader to the format accepted by
-            :meth:`forward`. Default: None.
-    """
-
-    def __init__(self,
-                 generator,
-                 pixel_loss,
-                 train_cfg=None,
-                 test_cfg=None,
-                 init_cfg=None,
-                 data_preprocessor=None):
-        super().__init__(
-            init_cfg=init_cfg, data_preprocessor=data_preprocessor)
-
-        self.train_cfg = train_cfg
-        self.test_cfg = test_cfg
-
-        # generator
-        self.generator = MODELS.build(generator)
-
-        # loss
-        self.pixel_loss = MODELS.build(pixel_loss)
-```
-
-Since `mmengine.model.BaseModel` provides the basic functions of the algorithmic model,
-such as weight initialization, batch input preprocessing, loss parsing, and model parameter updating,
-subclasses that inherit from BaseModel, i.e., `class BaseEditModel` in this example,
-only need to implement the forward method,
-which implements the logic to calculate losses and predictions.
-
-Specifically, the implemented `forward` function of `class BaseEditModel` takes `batch_inputs` and `data_samples` as input and returns results according to the `mode` argument.
-
-```python
-    def forward(self,
-                batch_inputs: torch.Tensor,
-                data_samples: Optional[List[DataSample]] = None,
-                mode: str = 'tensor',
-                **kwargs):
-        """Returns losses or predictions of training, validation, testing, and
-        simple inference process.
-
-        ``forward`` method of BaseModel is an abstract method, its subclasses
-        must implement this method.
-
-        Accepts ``batch_inputs`` and ``data_samples`` processed by
-        :attr:`data_preprocessor`, and returns results according to mode
-        arguments.
-
-        During non-distributed training, validation, and testing process,
-        ``forward`` will be called by ``BaseModel.train_step``,
-        ``BaseModel.val_step`` and ``BaseModel.test_step`` directly.
-
-        During distributed data parallel training process,
-        ``MMSeparateDistributedDataParallel.train_step`` will first call
-        ``DistributedDataParallel.forward`` to enable automatic
-        gradient synchronization, and then call ``forward`` to get training
-        loss.
-
-        Args:
-            batch_inputs (torch.Tensor): batch input tensor collated by
-                :attr:`data_preprocessor`.
-            data_samples (List[BaseDataElement], optional):
-                data samples collated by :attr:`data_preprocessor`.
-            mode (str): mode should be one of ``loss``, ``predict`` and
-                ``tensor``
-
-                - ``loss``: Called by ``train_step`` and return loss ``dict``
-                  used for logging
-                - ``predict``: Called by ``val_step`` and ``test_step``
-                  and return list of ``BaseDataElement`` results used for
-                  computing metric.
-                - ``tensor``: Called by custom use to get ``Tensor`` type
-                  results.
-
-        Returns:
-            ForwardResults:
-
-                - If ``mode == loss``, return a ``dict`` of loss tensor used
-                  for backward and logging.
-                - If ``mode == predict``, return a ``list`` of
-                  :obj:`BaseDataElement` for computing metric
-                  and getting inference result.
-                - If ``mode == tensor``, return a tensor, a ``tuple`` of
-                  tensors, or a ``dict`` of tensors for custom use.
-        """
-
-        if mode == 'tensor':
-            return self.forward_tensor(batch_inputs, data_samples, **kwargs)
-
-        elif mode == 'predict':
-            return self.forward_inference(batch_inputs, data_samples, **kwargs)
-
-        elif mode == 'loss':
-            return self.forward_train(batch_inputs, data_samples, **kwargs)
-```
-
-Specifically, in `forward_tensor`, `class BaseEditModel` returns the forward tensors of the network directly.
-
-```python
-    def forward_tensor(self, batch_inputs, data_samples=None, **kwargs):
-        """Forward tensor.
-            Returns result of simple forward.
-
-        Args:
-            batch_inputs (torch.Tensor): batch input tensor collated by
-                :attr:`data_preprocessor`.
-            data_samples (List[BaseDataElement], optional):
-                data samples collated by :attr:`data_preprocessor`.
-
-        Returns:
-            Tensor: result of simple forward.
-        """
-
-        feats = self.generator(batch_inputs, **kwargs)
-
-        return feats
-```
-
-In `forward_inference` function, `class BaseEditModel` first converts the forward tensors to images and then returns the images as output.
-
-```python
-    def forward_inference(self, batch_inputs, data_samples=None, **kwargs):
-        """Forward inference.
-            Returns predictions of validation, testing, and simple inference.
-
-        Args:
-            batch_inputs (torch.Tensor): batch input tensor collated by
-                :attr:`data_preprocessor`.
-            data_samples (List[BaseDataElement], optional):
-                data samples collated by :attr:`data_preprocessor`.
-
-        Returns:
-            List[DataSample]: predictions.
-        """
-
-        feats = self.forward_tensor(batch_inputs, data_samples, **kwargs)
-        feats = self.data_preprocessor.destructor(feats)
-        predictions = []
-        for idx in range(feats.shape[0]):
-            predictions.append(
-                DataSample(
-                    pred_img=feats[idx].to('cpu'),
-                    metainfo=data_samples[idx].metainfo))
-
-        return predictions
-```
-
-In `forward_train`, `class BaseEditModel` calculates the loss and returns a dictionary containing the losses as output.
-
-```python
-    def forward_train(self, batch_inputs, data_samples=None, **kwargs):
-        """Forward training.
-            Returns dict of losses of training.
-
-        Args:
-            batch_inputs (torch.Tensor): batch input tensor collated by
-                :attr:`data_preprocessor`.
-            data_samples (List[BaseDataElement], optional):
-                data samples collated by :attr:`data_preprocessor`.
-
-        Returns:
-            dict: Dict of losses.
-        """
-
-        feats = self.forward_tensor(batch_inputs, data_samples, **kwargs)
-        gt_imgs = [data_sample.gt_img.data for data_sample in data_samples]
-        batch_gt_data = torch.stack(gt_imgs)
-
-        loss = self.pixel_loss(feats, batch_gt_data)
-
-        return dict(loss=loss)
-
-```
-
-After the implementation of `class BaseEditModel`,
-we need to update the model list in `mmagic/models/__init__.py`,
-so that we can import and use `class BaseEditModel` by `mmagic.models`.
-
-```python
-from .base_models.base_edit_model import BaseEditModel
-```
-
-### Step 3: Start training SRCNN
-
-After implementing the network architecture and the forward loop of SRCNN,
-now we can create a new file `configs/srcnn/srcnn_x4k915_g1_1000k_div2k.py`
-to set the configurations needed by training SRCNN.
-
-In the configuration file, we need to specify the parameters of our model, `class BaseEditModel`, including the generator network architecture, loss function, additional training and testing configuration, and data preprocessor of input tensors. Please refer to the [Introduction to the loss in MMagic](./losses.md) for more details of losses in MMagic.
-
-```python
-# model settings
-scale = 4  # upscale factor (assumed here so the snippet is self-contained)
-model = dict(
-    type='BaseEditModel',
-    generator=dict(
-        type='SRCNNNet',
-        channels=(3, 64, 32, 3),
-        kernel_sizes=(9, 1, 5),
-        upscale_factor=scale),
-    pixel_loss=dict(type='L1Loss', loss_weight=1.0, reduction='mean'),
-    data_preprocessor=dict(
-        type='DataPreprocessor',
-        mean=[0., 0., 0.],
-        std=[255., 255., 255.],
-    ))
-```
-
-We also need to specify the training and testing dataloaders; please refer to the guide on creating your own dataloader.
-Finally, we can start training our own model with:
-
-```shell
-python tools/train.py configs/srcnn/srcnn_x4k915_g1_1000k_div2k.py
-```
-
-## An example of DCGAN
-
-Here, we take the implementation of the classical gan model, DCGAN \[2\], as an example.
-
-### Step 1: Define the network of DCGAN
-
-DCGAN is a classical image generative adversarial network \[2\]. To implement the network architecture of DCGAN, we need to create two new files `mmagic/models/editors/dcgan/dcgan_generator.py` and `mmagic/models/editors/dcgan/dcgan_discriminator.py`, and implement the generator (`class DCGANGenerator`) and discriminator (`class DCGANDiscriminator`).
-
-In this step, we implement `class DCGANGenerator` and `class DCGANDiscriminator` and define their network architectures in the `__init__` functions.
-In particular, we need to use `@MODULES.register_module()` to register the generator and discriminator with MMagic.
-
-Take the following code as an example:
-
-```python
-import numpy as np
-import torch
-import torch.nn as nn
-from mmcv.cnn import ConvModule
-from mmcv.runner import load_checkpoint
-from mmcv.utils.parrots_wrapper import _BatchNorm
-from mmengine.logging import MMLogger
-from mmengine.model.utils import normal_init
-
-from mmagic.models.builder import MODULES
-from ..common import get_module_device
-
-
-@MODULES.register_module()
-class DCGANGenerator(nn.Module):
-    def __init__(self,
-                 output_scale,
-                 out_channels=3,
-                 base_channels=1024,
-                 input_scale=4,
-                 noise_size=100,
-                 default_norm_cfg=dict(type='BN'),
-                 default_act_cfg=dict(type='ReLU'),
-                 out_act_cfg=dict(type='Tanh'),
-                 pretrained=None):
-        super().__init__()
-        self.output_scale = output_scale
-        self.base_channels = base_channels
-        self.input_scale = input_scale
-        self.noise_size = noise_size
-
-        # the number of times for upsampling
-        self.num_upsamples = int(np.log2(output_scale // input_scale))
-
-        # output 4x4 feature map
-        self.noise2feat = ConvModule(
-            noise_size,
-            base_channels,
-            kernel_size=4,
-            stride=1,
-            padding=0,
-            conv_cfg=dict(type='ConvTranspose2d'),
-            norm_cfg=default_norm_cfg,
-            act_cfg=default_act_cfg)
-
-        # build up upsampling backbone (excluding the output layer)
-        upsampling = []
-        curr_channel = base_channels
-        for _ in range(self.num_upsamples - 1):
-            upsampling.append(
-                ConvModule(
-                    curr_channel,
-                    curr_channel // 2,
-                    kernel_size=4,
-                    stride=2,
-                    padding=1,
-                    conv_cfg=dict(type='ConvTranspose2d'),
-                    norm_cfg=default_norm_cfg,
-                    act_cfg=default_act_cfg))
-
-            curr_channel //= 2
-
-        self.upsampling = nn.Sequential(*upsampling)
-
-        # output layer
-        self.output_layer = ConvModule(
-            curr_channel,
-            out_channels,
-            kernel_size=4,
-            stride=2,
-            padding=1,
-            conv_cfg=dict(type='ConvTranspose2d'),
-            norm_cfg=None,
-            act_cfg=out_act_cfg)
-```
-
-Then, we implement the `forward` function of `DCGANGenerator`, which takes a `noise` tensor or `num_batches` as input and returns the generated images.
-
-```python
-    def forward(self, noise, num_batches=0, return_noise=False):
-        # use the given noise tensor, or sample one when only
-        # `num_batches` is provided
-        if isinstance(noise, torch.Tensor):
-            noise_batch = noise
-        else:
-            noise_batch = torch.randn((num_batches, self.noise_size, 1, 1))
-        noise_batch = noise_batch.to(get_module_device(self))
-        x = self.noise2feat(noise_batch)
-        x = self.upsampling(x)
-        x = self.output_layer(x)
-        return x
-```
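-
-As a quick sanity check, the generator can be run standalone. A hedged sketch (the shapes follow from `output_scale=64` and the default `out_channels=3`):
-
-```python
-gen = DCGANGenerator(output_scale=64, base_channels=1024)
-fake_imgs = gen(None, num_batches=4)  # noise is sampled internally
-print(fake_imgs.shape)                # -> torch.Size([4, 3, 64, 64])
-```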
-
-If you want to implement a specific weight initialization method for your network, you need to add an `init_weights` function yourself.
-
-```python
-    def init_weights(self, pretrained=None):
-        if isinstance(pretrained, str):
-            logger = MMLogger.get_current_instance()
-            load_checkpoint(self, pretrained, strict=False, logger=logger)
-        elif pretrained is None:
-            for m in self.modules():
-                if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
-                    normal_init(m, 0, 0.02)
-                elif isinstance(m, _BatchNorm):
-                    nn.init.normal_(m.weight.data)
-                    nn.init.constant_(m.bias.data, 0)
-        else:
-            raise TypeError('pretrained must be a str or None but'
-                            f' got {type(pretrained)} instead.')
-```
-
-After the implementation of class `DCGANGenerator`, we need to update the model list in `mmagic/models/editors/__init__.py`, so that we can import and use class `DCGANGenerator` via `mmagic.models.editors`.
-
-The implementation of class `DCGANDiscriminator` follows similar logic, and you can find it [here](https://github.com/open-mmlab/mmagic/blob/main/mmagic/models/editors/dcgan/dcgan_discriminator.py).
-
-### Step 2: Design the model of DCGAN
-
-After the implementation of the network **Module**, we need to define our **Model** class `DCGAN`.
-
-Your **Model** should inherit from [`BaseModel`](https://github.com/open-mmlab/mmengine/blob/main/mmengine/model/base_model/base_model.py#L16) provided by MMEngine and implement three functions, `train_step`, `val_step` and `test_step`.
-
-- `train_step`: This function is responsible for updating the parameters of the network and is called by MMEngine's loop ([`IterBasedTrainLoop`](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/loops.py#L183) or [`EpochBasedTrainLoop`](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/loops.py#L18)). `train_step` takes a data batch and an [`OptimWrapper`](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/optim_wrapper.md) as input and returns a dict of logs.
-- `val_step`: This function is responsible for getting output for validation during the training process and is called by [`MultiValLoop`](https://github.com/open-mmlab/mmagic/blob/main/mmagic/engine/runner/multi_loops.py#L19).
-- `test_step`: This function is responsible for getting output in the test process and is called by [`MultiTestLoop`](https://github.com/open-mmlab/mmagic/blob/main/mmagic/engine/runner/multi_loops.py#L274).
-
-> Note that, in `train_step`, `val_step` and `test_step`, `DataPreprocessor` is called to preprocess the input data batch before feeding it to the neural network. To learn more about `DataPreprocessor`, please refer to this [file](https://github.com/open-mmlab/mmagic/blob/main/mmagic/models/data_preprocessors/gen_preprocessor.py) and this [tutorial](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/model.md#%E6%95%B0%E6%8D%AE%E5%A4%84%E7%90%86%E5%99%A8datapreprocessor).
-
-To simplify usage, we provide the [`BaseGAN`](https://github.com/open-mmlab/mmagic/blob/main/mmagic/models/base_models/base_gan.py) class in MMagic, which implements generic `train_step`, `val_step` and `test_step` functions for GAN models. With `BaseGAN` as the base class, each specific GAN algorithm only needs to implement `train_generator` and `train_discriminator`.
-
-In `train_step`, we support data preprocessing, gradient accumulation (realized by [`OptimWrapper`](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/optim_wrapper.md)) and exponential moving average (EMA) realized by [`ExponentialMovingAverage`](https://github.com/open-mmlab/mmagic/blob/main/mmagic/models/base_models/average_model.py#L19).
-
-```python
-    def train_step(self, data: dict,
-                   optim_wrapper: OptimWrapperDict) -> Dict[str, Tensor]:
-        message_hub = MessageHub.get_current_instance()
-        curr_iter = message_hub.get_info('iter')
-        data = self.data_preprocessor(data, True)
-        disc_optimizer_wrapper: OptimWrapper = optim_wrapper['discriminator']
-        disc_accu_iters = disc_optimizer_wrapper._accumulative_counts
-
-        # train discriminator, use context manager provided by MMEngine
-        with disc_optimizer_wrapper.optim_context(self.discriminator):
-            # train_discriminator should be implemented!
-            log_vars = self.train_discriminator(
-                **data, optimizer_wrapper=disc_optimizer_wrapper)
-
-        # add 1 to `curr_iter` because iter is updated in train loop.
-        # Whether to update the generator: we update the generator after
-        # the discriminator has been fully updated for
-        # `self.discriminator_steps` iterations, and one full update of
-        # the discriminator contains `disc_accu_iters` gradient
-        # accumulation steps.
-        if (curr_iter + 1) % (self.discriminator_steps * disc_accu_iters) == 0:
-            set_requires_grad(self.discriminator, False)
-            gen_optimizer_wrapper = optim_wrapper['generator']
-            gen_accu_iters = gen_optimizer_wrapper._accumulative_counts
-
-            log_vars_gen_list = []
-            # init optimizer wrapper status for generator manually
-            gen_optimizer_wrapper.initialize_count_status(
-                self.generator, 0, self.generator_steps * gen_accu_iters)
-            # update generator, use context manager provided by MMEngine
-            for _ in range(self.generator_steps * gen_accu_iters):
-                with gen_optimizer_wrapper.optim_context(self.generator):
-                    # train_generator should be implemented!
-                    log_vars_gen = self.train_generator(
-                        **data, optimizer_wrapper=gen_optimizer_wrapper)
-
-                log_vars_gen_list.append(log_vars_gen)
-            log_vars_gen = gather_log_vars(log_vars_gen_list)
-            log_vars_gen.pop('loss', None)  # remove 'loss' from gen logs
-
-            set_requires_grad(self.discriminator, True)
-
-            # only do ema after generator update
-            if self.with_ema_gen and (curr_iter + 1) >= (
-                    self.ema_start * self.discriminator_steps *
-                    disc_accu_iters):
-                self.generator_ema.update_parameters(
-                    self.generator.module
-                    if is_model_wrapper(self.generator) else self.generator)
-
-            log_vars.update(log_vars_gen)
-
-        # return the log dict
-        return log_vars
-```
-
-In `val_step` and `test_step`, we call data preprocessing and `BaseGAN.forward` in turn.
-
-```python
-    def val_step(self, data: dict) -> SampleList:
-        data = self.data_preprocessor(data)
-        # call `forward`
-        outputs = self(**data)
-        return outputs
-
-    def test_step(self, data: dict) -> SampleList:
-        data = self.data_preprocessor(data)
-        # call `forward`
-        outputs = self(**data)
-        return outputs
-```
-
-Then, we implement `train_generator` and `train_discriminator` in `DCGAN` class.
-
-```python
-from typing import Dict, Tuple
-
-import torch
-import torch.nn.functional as F
-from mmengine.optim import OptimWrapper
-from torch import Tensor
-
-from mmagic.registry import MODELS
-from .base_gan import BaseGAN
-
-
-@MODELS.register_module()
-class DCGAN(BaseGAN):
-    def disc_loss(self, disc_pred_fake: Tensor,
-                  disc_pred_real: Tensor) -> Tuple:
-        losses_dict = dict()
-        losses_dict['loss_disc_fake'] = F.binary_cross_entropy_with_logits(
-            disc_pred_fake, 0. * torch.ones_like(disc_pred_fake))
-        losses_dict['loss_disc_real'] = F.binary_cross_entropy_with_logits(
-            disc_pred_real, 1. * torch.ones_like(disc_pred_real))
-
-        loss, log_var = self.parse_losses(losses_dict)
-        return loss, log_var
-
-    def gen_loss(self, disc_pred_fake: Tensor) -> Tuple:
-        losses_dict = dict()
-        losses_dict['loss_gen'] = F.binary_cross_entropy_with_logits(
-            disc_pred_fake, 1. * torch.ones_like(disc_pred_fake))
-        loss, log_var = self.parse_losses(losses_dict)
-        return loss, log_var
-
-    def train_discriminator(
-            self, inputs, data_sample,
-            optimizer_wrapper: OptimWrapper) -> Dict[str, Tensor]:
-        real_imgs = inputs['img']
-
-        num_batches = real_imgs.shape[0]
-
-        noise_batch = self.noise_fn(num_batches=num_batches)
-        with torch.no_grad():
-            fake_imgs = self.generator(noise=noise_batch, return_noise=False)
-
-        disc_pred_fake = self.discriminator(fake_imgs)
-        disc_pred_real = self.discriminator(real_imgs)
-
-        parsed_losses, log_vars = self.disc_loss(disc_pred_fake,
-                                                 disc_pred_real)
-        optimizer_wrapper.update_params(parsed_losses)
-        return log_vars
-
-    def train_generator(self, inputs, data_sample,
-                        optimizer_wrapper: OptimWrapper) -> Dict[str, Tensor]:
-        num_batches = inputs['img'].shape[0]
-
-        noise = self.noise_fn(num_batches=num_batches)
-        fake_imgs = self.generator(noise=noise, return_noise=False)
-
-        disc_pred_fake = self.discriminator(fake_imgs)
-        parsed_loss, log_vars = self.gen_loss(disc_pred_fake)
-
-        optimizer_wrapper.update_params(parsed_loss)
-        return log_vars
-```
-
-After the implementation of `class DCGAN`, we need to update the model list in `mmagic/models/__init__.py`, so that we can import and use `class DCGAN` via `mmagic.models`.
-
-### Step 3: Start training DCGAN
-
-After implementing the network **Module** and the **Model** of DCGAN,
-we can now create a new file `configs/dcgan/dcgan_1xb128-5epoches_lsun-bedroom-64x64.py`
-to set the configurations needed for training DCGAN.
-
-In the configuration file, we need to specify the parameters of our model, `class DCGAN`, including the generator and discriminator network architectures and the data preprocessor for input tensors.
-
-```python
-# model settings
-model = dict(
-    type='DCGAN',
-    noise_size=100,
-    data_preprocessor=dict(type='GANDataPreprocessor'),
-    generator=dict(type='DCGANGenerator', output_scale=64, base_channels=1024),
-    discriminator=dict(
-        type='DCGANDiscriminator',
-        input_scale=64,
-        output_scale=4,
-        out_channels=1))
-```
-
-We also need to specify the training dataloader and testing dataloader according to [create your own dataloader](dataset.md).
-Finally, we can start training our own model by:
-
-```bash
-python tools/train.py configs/dcgan/dcgan_1xb128-5epoches_lsun-bedroom-64x64.py
-```
-
-## References
-
-1. Dong, Chao, Chen Change Loy, Kaiming He, and Xiaoou Tang. "Image Super-Resolution Using Deep Convolutional Networks." IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015.
-
-2. Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks." arXiv preprint arXiv:1511.06434 (2015).
diff --git a/docs/en/howto/transforms.md b/docs/en/howto/transforms.md
deleted file mode 100644
index d10e4e0a3a..0000000000
--- a/docs/en/howto/transforms.md
+++ /dev/null
@@ -1,674 +0,0 @@
-# How to design your own data transforms
-
-In this tutorial, we introduce the design of the transforms pipeline in MMagic.
-
-The structure of this guide is as follows:
-
-- [How to design your own data transforms](#how-to-design-your-own-data-transforms)
-  - [Data pipelines in MMagic](#data-pipelines-in-mmagic)
-    - [A simple example of data transform](#a-simple-example-of-data-transform)
-    - [An example of BasicVSR](#an-example-of-basicvsr)
-    - [An example of Pix2Pix](#an-example-of-pix2pix)
-  - [Supported transforms in MMagic](#supported-transforms-in-mmagic)
-    - [Data loading](#data-loading)
-    - [Pre-processing](#pre-processing)
-    - [Formatting](#formatting)
-    - [Albumentations](#albumentations)
-  - [Extend and use custom pipelines](#extend-and-use-custom-pipelines)
-    - [A simple example of MyTransform](#a-simple-example-of-mytransform)
-    - [An example of flipping](#an-example-of-flipping)
-
-## Data pipelines in MMagic
-
-Following typical conventions, we use `Dataset` and `DataLoader` for data loading with multiple workers. `Dataset` returns a dict of data items corresponding to the arguments of the model's forward method.
-
-The data preparation pipeline and the dataset are decoupled. Usually a dataset defines how to process the annotations and a data pipeline defines all the steps to prepare a data dict.
-
-A pipeline consists of a sequence of operations. Each operation takes a dict as input and outputs a dict for the next transform.
-
-The operations are categorized into data loading, pre-processing, and formatting.
-
-In MMagic, all data transformations are inherited from `BaseTransform`.
-The input and output types of transformations are both dict.
-
-### A simple example of data transform
-
-```python
->>> from mmagic.transforms import LoadPairedImageFromFile
->>> transforms = LoadPairedImageFromFile(
->>>     key='pair',
->>>     domain_a='horse',
->>>     domain_b='zebra',
->>>     flag='color')
->>> data_dict = {'pair_path': './data/pix2pix/facades/train/1.png'}
->>> data_dict = transforms(data_dict)
->>> print(data_dict.keys())
-dict_keys(['pair_path', 'pair', 'pair_ori_shape', 'img_mask', 'img_photo', 'img_mask_path', 'img_photo_path', 'img_mask_ori_shape', 'img_photo_ori_shape'])
-```
-
-Generally, the last step of the transforms pipeline must be `PackInputs`.
-`PackInputs` will pack the processed data into a dict containing two fields: `inputs` and `data_samples`.
-`inputs` is the variable you want to use as the model's input, which can be a `torch.Tensor`, a dict of `torch.Tensor`, or any other type you want.
-`data_samples` is a list of `DataSample`. Each `DataSample` contains the ground truth and necessary information for the corresponding input.
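-
-In other words, once the pipeline has run, the packed dict can be unpacked as below. This is a hedged sketch; `PackInputs` is assumed to be importable from `mmagic.datasets.transforms`, and `data_dict` to be the output of the preceding transforms:
-
-```python
-from mmagic.datasets.transforms import PackInputs
-
-packed = PackInputs()(data_dict)
-inputs = packed['inputs']              # tensor(s) fed to the model
-data_samples = packed['data_samples']  # DataSample(s) with GT and meta info
-```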
-
-### An example of BasicVSR
-
-Here is a pipeline example for BasicVSR.
-
-```python
-train_pipeline = [
-    dict(type='LoadImageFromFile', key='img', channel_order='rgb'),
-    dict(type='LoadImageFromFile', key='gt', channel_order='rgb'),
-    dict(type='SetValues', dictionary=dict(scale=scale)),
-    dict(type='PairedRandomCrop', gt_patch_size=256),
-    dict(
-        type='Flip',
-        keys=['img', 'gt'],
-        flip_ratio=0.5,
-        direction='horizontal'),
-    dict(
-        type='Flip', keys=['img', 'gt'], flip_ratio=0.5, direction='vertical'),
-    dict(type='RandomTransposeHW', keys=['img', 'gt'], transpose_ratio=0.5),
-    dict(type='MirrorSequence', keys=['img', 'gt']),
-    dict(type='PackInputs')
-]
-
-val_pipeline = [
-    dict(type='GenerateSegmentIndices', interval_list=[1]),
-    dict(type='LoadImageFromFile', key='img', channel_order='rgb'),
-    dict(type='LoadImageFromFile', key='gt', channel_order='rgb'),
-    dict(type='PackInputs')
-]
-
-test_pipeline = [
-    dict(type='LoadImageFromFile', key='img', channel_order='rgb'),
-    dict(type='LoadImageFromFile', key='gt', channel_order='rgb'),
-    dict(type='MirrorSequence', keys=['img']),
-    dict(type='PackInputs')
-]
-```
-
-For each operation, we list the related dict fields that are added/updated/removed; the dict fields marked by '\*' are optional.
-
-### An example of Pix2Pix
-
-Here is a pipeline example for Pix2Pix training on the aerial2maps dataset.
-
-```python
-domain_a = 'aerial'
-domain_b = 'map'
-
-pipeline = [
-    dict(
-        type='LoadPairedImageFromFile',
-        io_backend='disk',
-        key='pair',
-        domain_a=domain_a,
-        domain_b=domain_b,
-        flag='color'),
-    dict(
-        type='TransformBroadcaster',
-        mapping={'img': [f'img_{domain_a}', f'img_{domain_b}']},
-        auto_remap=True,
-        share_random_params=True,
-        transforms=[
-            dict(
-                type='mmagic.Resize', scale=(286, 286),
-                interpolation='bicubic'),
-            dict(type='mmagic.FixedCrop', crop_size=(256, 256))
-        ]),
-    dict(
-        type='Flip',
-        keys=[f'img_{domain_a}', f'img_{domain_b}'],
-        direction='horizontal'),
-    dict(
-        type='PackInputs',
-        keys=[f'img_{domain_a}', f'img_{domain_b}', 'pair'])
-]
-```
-
-## Supported transforms in MMagic
-
-### Data loading
-
-<table class="docutils">
-   <thead>
-      <tr>
-         <th style="text-align:center">Transform</th>
-         <th style="text-align:center">Modification of Results' keys</th>
-      </tr>
-   </thead>
-   <tbody>
-      <tr>
-         <td>
-            <code>LoadImageFromFile</code>
-         </td>
-         <td>
-            - add: img, img_path, img_ori_shape, \*ori_img
-         </td>
-      </tr>
-      <tr>
-         <td>
-            <code>RandomLoadResizeBg</code>
-         </td>
-         <td>
-            - add: bg
-         </td>
-      </tr>
-      <tr>
-         <td>
-            <code>LoadMask</code>
-         </td>
-         <td>
-            - add: mask
-         </td>
-      </tr>
-      <tr>
-         <td>
-            <code>GetSpatialDiscountMask</code>
-         </td>
-         <td>
-            - add: discount_mask
-         </td>
-      </tr>
-   </tbody>
-</table>
-
-### Pre-processing
-
-<table class="docutils">
-   <thead>
-      <tr>
-         <th style="text-align:center">Transform</th>
-         <th style="text-align:center">Modification of Results' keys</th>
-      </tr>
-   </thead>
-   <tbody>
-      <tr>
-         <td>
-            <code>Resize</code>
-         </td >
-         <td>
-            - add: scale_factor, keep_ratio, interpolation, backend
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>MATLABLikeResize</code>
-         </td >
-         <td>
-            - add: scale, output_shape
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>RandomRotation</code>
-         </td >
-         <td>
-            - add: degrees
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>Flip</code>
-         </td >
-         <td>
-            - add: flip, flip_direction
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>RandomAffine</code>
-         </td >
-         <td>
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>RandomJitter</code>
-         </td >
-         <td>
-            - update: fg (img)
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>ColorJitter</code>
-         </td >
-         <td>
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>BinarizeImage</code>
-         </td >
-         <td>
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>RandomMaskDilation</code>
-         </td >
-         <td>
-            - add: img_dilate_kernel_size
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>RandomTransposeHW</code>
-         </td >
-         <td>
-            - add: transpose
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>RandomDownSampling</code>
-         </td >
-         <td>
-            - update: scale, gt (img), lq (img)
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>RandomBlur</code>
-         </td >
-         <td>
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>RandomResize</code>
-         </td >
-         <td>
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>RandomNoise</code>
-         </td >
-         <td>
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>RandomJPEGCompression</code>
-         </td >
-         <td>
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>RandomVideoCompression</code>
-         </td >
-         <td>
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>DegradationsWithShuffle</code>
-         </td >
-         <td>
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>GenerateFrameIndices</code>
-         </td >
-         <td>
-            - update: img_path (gt_path, lq_path)
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>GenerateFrameIndiceswithPadding</code>
-         </td >
-         <td>
-            - update: img_path (gt_path, lq_path)
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>TemporalReverse</code>
-         </td >
-         <td>
-            - add: reverse
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>GenerateSegmentIndices</code>
-         </td >
-         <td>
-            - add: interval
-            - update: img_path (gt_path, lq_path)
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>MirrorSequence</code>
-         </td >
-         <td>
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>CopyValues</code>
-         </td >
-         <td>
-            - add: specified by <code>dst_key</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>UnsharpMasking</code>
-         </td >
-         <td>
-            - add: img_unsharp
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>Crop</code>
-         </td >
-         <td>
-            - add: img_crop_bbox, crop_size
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>RandomResizedCrop</code>
-         </td >
-         <td>
-            - add: img_crop_bbox
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>FixedCrop</code>
-         </td >
-         <td>
-            - add: crop_size, crop_pos
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>PairedRandomCrop</code>
-         </td >
-         <td>
-            - update: gt (img), lq (img)
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>CropAroundCenter</code>
-         </td >
-         <td>
-            - add: crop_bbox
-            - update: fg (img), alpha (img), trimap (img), bg (img)
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>CropAroundUnknown</code>
-         </td >
-         <td>
-            - add: crop_bbox
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>CropAroundFg</code>
-         </td >
-         <td>
-            - add: crop_bbox
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>ModCrop</code>
-         </td >
-         <td>
-            - update: gt (img)
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>CropLike</code>
-         </td >
-         <td>
-            - update: specified by <code>target_key</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>GetMaskedImage</code>
-         </td >
-         <td>
-            - add: masked_img
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>GenerateFacialHeatmap</code>
-         </td >
-         <td>
-            - add: heatmap
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>GenerateCoordinateAndCell</code>
-         </td >
-         <td>
-            - add: coord, cell
-            - update: gt (img)
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>Normalize</code>
-         </td >
-         <td>
-            - add: img_norm_cfg
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-      <tr>
-         <td>
-            <code>RescaleToZeroOne</code>
-         </td >
-         <td>
-            - update: specified by <code>keys</code>
-         </td >
-      </tr>
-   </tbody>
-</table>
-
-### Formatting
-
-<table class="docutils">
-   <thead>
-      <tr>
-         <th style="text-align:center">Transform</th>
-         <th style="text-align:center">Modification of Results' keys</th>
-      </tr>
-   </thead>
-   <tbody>
-      <tr>
-         <td>
-            <code>ToTensor</code>
-         </td>
-         <td>
-            - update: specified by <code>keys</code>
-         </td>
-      </tr>
-      <tr>
-         <td>
-            <code>FormatTrimap</code>
-         </td>
-         <td>
-            - update: trimap
-         </td>
-      </tr>
-      <tr>
-         <td>
-            <code>PackInputs</code>
-         </td>
-         <td>
-            - add: inputs, data_sample
-            - remove: all other keys
-         </td>
-      </tr>
-   </tbody>
-</table>
-
-### Albumentations
-
-MMagic supports adding custom transformations from the [Albumentations](https://github.com/albumentations-team/albumentations) library. Please visit https://albumentations.ai/docs/getting_started/transforms_and_targets for more information.
-
-An example of Albumentations' `transforms` is as follows:
-
-```python
-albu_transforms = [
-   dict(
-         type='Resize',
-         height=100,
-         width=100,
-   ),
-   dict(
-         type='RandomFog',
-         p=0.5,
-   ),
-   dict(
-         type='RandomRain',
-         p=0.5
-   ),
-   dict(
-         type='RandomSnow',
-         p=0.5,
-   ),
-]
-pipeline = [
-   dict(
-         type='LoadImageFromFile',
-         key='img',
-         color_type='color',
-         channel_order='rgb',
-         imdecode_backend='cv2'),
-   dict(
-         type='Albumentations',
-         keys=['img'],
-         transforms=albu_transforms),
-   dict(type='PackInputs')
-]
-```
-
-## Extend and use custom pipelines
-
-### A simple example of MyTransform
-
-1. Write a new pipeline in a file, e.g., in `my_pipeline.py`. It takes a dict as input and returns a dict.
-
-```python
-import random
-from mmcv.transforms import BaseTransform
-from mmagic.registry import TRANSFORMS
-
-
-@TRANSFORMS.register_module()
-class MyTransform(BaseTransform):
-    """Add your transform
-
-    Args:
-        p (float): Probability of shifts. Default 0.5.
-    """
-
-    def __init__(self, p=0.5):
-        self.p = p
-
-    def transform(self, results):
-        if random.random() > self.p:
-            results['dummy'] = True
-        return results
-
-    def __repr__(self):
-
-        repr_str = self.__class__.__name__
-        repr_str += (f'(p={self.p})')
-
-        return repr_str
-```
-
-2. Import and use the pipeline in your config file.
-
-Make sure the import is relative to where your train script is located.
-
-```python
-train_pipeline = [
-    ...
-    dict(type='MyTransform', p=0.2),
-    ...
-]
-```
-
-### An example of flipping
-
-Here we use a simple flipping transformation as an example:
-
-```python
-import random
-import mmcv
-from mmcv.transforms import BaseTransform, TRANSFORMS
-
-@TRANSFORMS.register_module()
-class MyFlip(BaseTransform):
-    def __init__(self, direction: str):
-        super().__init__()
-        self.direction = direction
-
-    def transform(self, results: dict) -> dict:
-        img = results['img']
-        results['img'] = mmcv.imflip(img, direction=self.direction)
-        return results
-```
-
-Thus, we can instantiate a `MyFlip` object and use it to process the data dict.
-
-```python
-import numpy as np
-
-transform = MyFlip(direction='horizontal')
-data_dict = {'img': np.random.rand(224, 224, 3)}
-data_dict = transform(data_dict)
-processed_img = data_dict['img']
-```
-
-Or, we can use the `MyFlip` transformation in the data pipeline in our config file.
-
-```python
-pipeline = [
-    ...
-    dict(type='MyFlip', direction='horizontal'),
-    ...
-]
-```
-
-Note that if you want to use `MyFlip` in a config, you must ensure that the file containing `MyFlip` is imported at runtime.
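-
-One common way to do this, assuming MMEngine's `custom_imports` mechanism and that the class lives in a module named `my_flip` on the `PYTHONPATH` (both names are illustrative), is to declare the import in the config itself:
-
-```python
-custom_imports = dict(imports=['my_flip'], allow_failed_imports=False)
-
-pipeline = [
-    dict(type='MyFlip', direction='horizontal'),
-]
-```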
diff --git a/docs/en/index.rst b/docs/en/index.rst
deleted file mode 100644
index 2df548dcf3..0000000000
--- a/docs/en/index.rst
+++ /dev/null
@@ -1,179 +0,0 @@
-Welcome to MMagic's documentation!
-=====================================
-
-Languages:
-`English <https://mmagic.readthedocs.io/en/latest/>`_
-|
-`简体中文 <https://mmagic.readthedocs.io/zh_CN/latest/>`_
-
-MMagic (**M**\ultimodal **A**\dvanced, **G**\enerative, and **I**\ntelligent **C**\reation) is an open-source AIGC toolbox for professional AI researchers and machine learning engineers to explore image and video processing, editing and generation.
-
-MMagic supports various fundamental generative models, including:
-
-* Unconditional Generative Adversarial Networks (GANs)
-* Conditional Generative Adversarial Networks (GANs)
-* Internal Learning
-* Diffusion Models
-* And many other generative models are coming soon!
-
-MMagic supports various applications, including:
-
-- Text-to-Image
-- Image-to-image translation
-- 3D-aware generation
-- Image super-resolution
-- Video super-resolution
-- Video frame interpolation
-- Image inpainting
-- Image matting
-- Image restoration
-- Image colorization
-- Image generation
-- And many other applications are coming soon!
-
-MMagic is based on `PyTorch <https://pytorch.org>`_ and is a part of the `OpenMMLab project <https://openmmlab.com/>`_.
-Codes are available on `GitHub <https://github.com/open-mmlab/mmagic>`_.
-
-
-Documentation
-=============
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Community
-
-   community/contributing.md
-   community/projects.md
-
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Get Started
-
-   get_started/overview.md
-   get_started/install.md
-   get_started/quick_run.md
-
-
-.. toctree::
-   :maxdepth: 1
-   :caption: User Guides
-
-   user_guides/config.md
-   user_guides/dataset_prepare.md
-   user_guides/inference.md
-   user_guides/train_test.md
-   user_guides/metrics.md
-   user_guides/visualization.md
-   user_guides/useful_tools.md
-   user_guides/deploy.md
-
-
-.. toctree::
-   :maxdepth: 2
-   :caption: Advanced Guides
-
-   advanced_guides/models.md
-   advanced_guides/dataset.md
-   advanced_guides/transforms.md
-   advanced_guides/losses.md
-   advanced_guides/evaluator.md
-   advanced_guides/structures.md
-   advanced_guides/data_preprocessor.md
-   advanced_guides/data_flow.md
-
-
-.. toctree::
-   :maxdepth: 2
-   :caption: How To
-
-   howto/models.md
-   howto/dataset.md
-   howto/transforms.md
-   howto/losses.md
-
-.. toctree::
-   :maxdepth: 1
-   :caption: FAQ
-
-   faq.md
-
-.. toctree::
-   :maxdepth: 2
-   :caption: Model Zoo
-
-   model_zoo/index.rst
-
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Dataset Zoo
-
-   dataset_zoo/index.rst
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Changelog
-
-   changelog.md
-
-.. toctree::
-   :maxdepth: 2
-   :caption: API Reference
-
-   mmagic.apis.inferencers <autoapi/mmagic/apis/inferencers/index.rst>
-   mmagic.structures <autoapi/mmagic/structures/index.rst>
-   mmagic.datasets <autoapi/mmagic/datasets/index.rst>
-   mmagic.datasets.transforms <autoapi/mmagic/datasets/transforms/index.rst>
-   mmagic.evaluation <autoapi/mmagic/evaluation/index.rst>
-   mmagic.visualization <autoapi/mmagic/visualization/index.rst>
-   mmagic.engine.hooks <autoapi/mmagic/engine/hooks/index.rst>
-   mmagic.engine.logging <autoapi/mmagic/engine/logging/index.rst>
-   mmagic.engine.optimizers <autoapi/mmagic/engine/optimizers/index.rst>
-   mmagic.engine.runner <autoapi/mmagic/engine/runner/index.rst>
-   mmagic.engine.schedulers <autoapi/mmagic/engine/schedulers/index.rst>
-   mmagic.models.archs <autoapi/mmagic/models/archs/index.rst>
-   mmagic.models.base_models <autoapi/mmagic/models/base_models/index.rst>
-   mmagic.models.losses <autoapi/mmagic/models/losses/index.rst>
-   mmagic.models.data_preprocessors <autoapi/mmagic/models/data_preprocessors/index.rst>
-   mmagic.models.utils <autoapi/mmagic/models/utils/index.rst>
-   mmagic.models.editors <autoapi/mmagic/models/editors/index.rst>
-   mmagic.utils <autoapi/mmagic/utils/index.rst>
-
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Migration from MMEdit 0.x
-
-   migration/overview.md
-   migration/runtime.md
-   migration/models.md
-   migration/eval_test.md
-   migration/schedule.md
-   migration/data.md
-   migration/distributed_train.md
-   migration/optimizers.md
-   migration/visualization.md
-   migration/amp.md
-
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Device Support
-
-   device/npu.md
-
-
-.. toctree::
-   :caption: Switch Language
-
-   switch_language.md
-
-
-
-Indices and tables
-==================
-
-* :ref:`genindex`
-* :ref:`modindex`
-* :ref:`search`
diff --git a/docs/en/make.bat b/docs/en/make.bat
deleted file mode 100644
index 8a3a0e25b4..0000000000
--- a/docs/en/make.bat
+++ /dev/null
@@ -1,36 +0,0 @@
-@ECHO OFF
-
-pushd %~dp0
-
-REM Command file for Sphinx documentation
-
-if "%SPHINXBUILD%" == "" (
-	set SPHINXBUILD=sphinx-build
-)
-set SOURCEDIR=.
-set BUILDDIR=_build
-
-if "%1" == "" goto help
-
-%SPHINXBUILD% >NUL 2>NUL
-if errorlevel 9009 (
-	echo.
-	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
-	echo.installed, then set the SPHINXBUILD environment variable to point
-	echo.to the full path of the 'sphinx-build' executable. Alternatively you
-	echo.may add the Sphinx directory to PATH.
-	echo.
-	echo.If you don't have Sphinx installed, grab it from
-	echo.http://sphinx-doc.org/
-	exit /b 1
-)
-
-
-%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-goto end
-
-:help
-%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-
-:end
-popd
diff --git a/docs/en/migration/amp.md b/docs/en/migration/amp.md
deleted file mode 100644
index 416a7eafea..0000000000
--- a/docs/en/migration/amp.md
+++ /dev/null
@@ -1,152 +0,0 @@
-# Migration of AMP Training
-
-In 0.x, MMEditing does not support AMP training for the entire forward process.
-Instead, users must use the `auto_fp16` decorator to wrap a specific submodule and convert its parameters to fp16.
-This allows for fine-grained control of the model parameters, but is cumbersome to use.
-In addition, users need to handle operations such as loss scaling during training by themselves.
-
-MMagic 1.x uses the `AmpOptimWrapper` provided by MMEngine.
-In `AmpOptimWrapper.update_params`, gradient scaling and `GradScaler` updating are performed automatically.
-In the `optim_context` context manager, `autocast` is applied to the entire forward process.
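-
-In isolation, the new wrapper is used as below. This is a minimal sketch, assuming a CUDA-capable environment (AMP requires one) and a toy module; only the wrapper calls matter here:
-
-```python
-import torch
-from mmengine.optim import AmpOptimWrapper
-
-net = torch.nn.Linear(4, 4).cuda()
-optimizer = torch.optim.SGD(net.parameters(), lr=0.01)
-wrapper = AmpOptimWrapper(optimizer=optimizer, loss_scale='dynamic')
-
-with wrapper.optim_context(net):  # enables autocast for the forward pass
-    loss = net(torch.randn(2, 4).cuda()).mean()
-wrapper.update_params(loss)       # scale, backward, step and zero_grad
-```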
-
-Specifically, the difference between 0.x and 1.x is as follows:
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> 0.x Version </th>
-    <th> 1.x Version </th>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-# config
-runner = dict(fp16_loss_scaler=dict(init_scale=512))
-```
-
-```python
-# code
-import torch.nn as nn
-from mmedit.models.builder import build_model
-from mmedit.core.runners.fp16_utils import auto_fp16
-
-
-class DemoModule(nn.Module):
-    def __init__(self, cfg):
-        super().__init__()
-        self.net = build_model(cfg)
-
-    @auto_fp16
-    def forward(self, x):
-        return self.net(x)
-
-class DemoModel(nn.Module):
-
-    def __init__(self, cfg):
-        super().__init__()
-        self.demo_network = DemoModule(cfg)
-
-    def train_step(self,
-                   data_batch,
-                   optimizer,
-                   ddp_reducer=None,
-                   loss_scaler=None,
-                   use_apex_amp=False,
-                   running_status=None):
-        # get data from data_batch
-        inputs = data_batch['img']
-        output = self.demo_network(inputs)
-
-        optimizer.zero_grad()
-        loss, log_vars = self.get_loss(output)
-
-        if ddp_reducer is not None:
-            ddp_reducer.prepare_for_backward(_find_tensors(loss))
-
-        if loss_scaler:
-            # add support for fp16
-            loss_scaler.scale(loss).backward()
-        elif use_apex_amp:
-            from apex import amp
-            with amp.scale_loss(loss, optimizer,
-                    loss_id=0) as scaled_loss:
-                scaled_loss.backward()
-        else:
-            loss.backward()
-
-        if loss_scaler:
-            loss_scaler.unscale_(optimizer)
-            loss_scaler.step(optimizer)
-        else:
-            optimizer.step()
-```
-
-</td>
-
-<td valign="top">
-
-```python
-# config
-optim_wrapper = dict(
-    constructor='OptimWrapperConstructor',
-    generator=dict(
-        accumulative_counts=8,
-        optimizer=dict(type='Adam', lr=0.0001, betas=(0.0, 0.999), eps=1e-06),
-        type='AmpOptimWrapper',  # use amp wrapper
-        loss_scale='dynamic'),
-    discriminator=dict(
-        accumulative_counts=8,
-        optimizer=dict(type='Adam', lr=0.0004, betas=(0.0, 0.999), eps=1e-06),
-        type='AmpOptimWrapper',  # use amp wrapper
-        loss_scale='dynamic'))
-```
-
-```python
-# code
-import torch.nn as nn
-from mmagic.registry import MODULES
-from mmengine.model import BaseModel
-
-
-class DemoModule(nn.Module):
-    def __init__(self, cfg):
-        super().__init__()
-        self.net = MODULES.build(cfg)
-
-    def forward(self, x):
-        return self.net(x)
-
-class DemoModel(BaseModel):
-    def __init__(self, cfg):
-        super().__init__()
-        self.demo_network = DemoModule(cfg)
-
-    def train_step(self, data, optim_wrapper):
-        # get data from data_batch
-        data = self.data_preprocessor(data, True)
-        inputs = data['inputs']
-
-        with optim_wrapper.optim_context(self.demo_network):
-            output = self.demo_network(inputs)
-        loss_dict = self.get_loss(output)
-        # use `parse_losses` provided by `BaseModel`
-        loss, log_vars = self.parse_losses(loss_dict)
-        optim_wrapper.update_params(loss)
-
-        return log_vars
-```
-
-</td>
-
-</tr>
-</thead>
-</table>
-
-To avoid modifications to the configuration file, MMagic provides the `--amp` option in `train.py`, which allows users to start AMP training without changing the config.
-Users can start AMP training with the following command:
-
-```bash
-bash tools/dist_train.sh CONFIG GPUS --amp
-
-# for slurm users
-bash tools/slurm_train.sh PARTITION JOB_NAME CONFIG WORK_DIR --amp
-```
diff --git a/docs/en/migration/data.md b/docs/en/migration/data.md
deleted file mode 100644
index d534391932..0000000000
--- a/docs/en/migration/data.md
+++ /dev/null
@@ -1,233 +0,0 @@
-# Migration of Data Settings
-
-This section introduces the migration of data settings:
-
-- [Migration of Data Settings](#migration-of-data-settings)
-  - [Data pipelines](#data-pipelines)
-  - [Dataloader](#dataloader)
-
-## Data pipelines
-
-We update data pipeline settings in MMagic 1.x. Important modifications are as follows.
-
-- Normalization and color space transform operations are removed from the dataset transform pipelines and moved to `data_preprocessor` (see the sketch after this list).
-- The original formatting transforms `Collect` and `ToTensor` are combined into `PackInputs`.
-  More details of data pipelines are shown in [transform guides](../howto/transforms.md).
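-
-For reference, the normalization that used to live in the pipeline is now declared on the model side. A hedged sketch of the corresponding `data_preprocessor` field (the exact values depend on your model):
-
-```python
-data_preprocessor = dict(
-    type='DataPreprocessor',
-    mean=[0., 0., 0.],       # replaces the 0.x `Normalize` transform
-    std=[255., 255., 255.],  # together with `RescaleToZeroOne`
-)
-```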
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> Original </th>
-    <th> New </th>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-train_pipeline = [  # Training data processing pipeline
-    dict(type='LoadImageFromFile',  # Load images from files
-        io_backend='disk',  # io backend
-        key='lq',  # Keys in results to find corresponding path
-        flag='unchanged'),  # flag for reading images
-    dict(type='LoadImageFromFile',  # Load images from files
-        io_backend='disk',  # io backend
-        key='gt',  # Keys in results to find corresponding path
-        flag='unchanged'),  # flag for reading images
-    dict(type='RescaleToZeroOne', keys=['lq', 'gt']),  # Rescale images from [0, 255] to [0, 1]
-    dict(type='Normalize',  # Augmentation pipeline that normalize the input images
-        keys=['lq', 'gt'],  # Images to be normalized
-        mean=[0, 0, 0],  # Mean values
-        std=[1, 1, 1],  # Standard deviation
-        to_rgb=True),  # Change to RGB channel
-    dict(type='PairedRandomCrop', gt_patch_size=96),  # Paired random crop
-    dict(type='Flip',  # Flip images
-        keys=['lq', 'gt'],  # Images to be flipped
-        flip_ratio=0.5,  # Flip ratio
-        direction='horizontal'),  # Flip direction
-    dict(type='Flip',  # Flip images
-        keys=['lq', 'gt'],  # Images to be flipped
-        flip_ratio=0.5,  # Flip ratio
-        direction='vertical'),  # Flip direction
-    dict(type='RandomTransposeHW',  # Random transpose h and w for images
-        keys=['lq', 'gt'],  # Images to be transposed
-        transpose_ratio=0.5  # Transpose ratio
-        ),
-    dict(type='Collect',  # Pipeline that decides which keys in the data should be passed to the model
-        keys=['lq', 'gt'],  # Keys to pass to the model
-        meta_keys=['lq_path', 'gt_path']), # Meta information keys. In training, meta information is not needed
-    dict(type='ToTensor',  # Convert images to tensor
-        keys=['lq', 'gt'])  # Images to be converted to Tensor
-]
-test_pipeline = [  # Test pipeline
-    dict(
-        type='LoadImageFromFile',  # Load images from files
-        io_backend='disk',  # io backend
-        key='lq',  # Keys in results to find corresponding path
-        flag='unchanged'),  # flag for reading images
-    dict(
-        type='LoadImageFromFile',  # Load images from files
-        io_backend='disk',  # io backend
-        key='gt',  # Keys in results to find corresponding path
-        flag='unchanged'),  # flag for reading images
-    dict(type='RescaleToZeroOne', keys=['lq', 'gt']),  # Rescale images from [0, 255] to [0, 1]
-    dict(
-        type='Normalize',  # Augmentation pipeline that normalize the input images
-        keys=['lq', 'gt'],  # Images to be normalized
-        mean=[0, 0, 0],  # Mean values
-        std=[1, 1, 1],  # Standard deviation
-        to_rgb=True),  # Change to RGB channel
-    dict(type='Collect',  # Pipeline that decides which keys in the data should be passed to the model
-        keys=['lq', 'gt'],  # Keys to pass to the model
-        meta_keys=['lq_path', 'gt_path']),  # Meta information keys
-    dict(type='ToTensor',  # Convert images to tensor
-        keys=['lq', 'gt'])  # Images to be converted to Tensor
-]
-```
-
-</td>
-
-<td valign="top">
-
-```python
-train_pipeline = [  # Training data processing pipeline
-    dict(type='LoadImageFromFile',  # Load images from files
-        key='img',  # Keys in results to find corresponding path
-        color_type='color',  # Color type of image
-        channel_order='rgb',  # Channel order of image
-        imdecode_backend='cv2'),  # decode backend
-    dict(type='LoadImageFromFile',  # Load images from files
-        key='gt',  # Keys in results to find corresponding path
-        color_type='color',  # Color type of image
-        channel_order='rgb',  # Channel order of image
-        imdecode_backend='cv2'),  # decode backend
-    dict(type='SetValues', dictionary=dict(scale=scale)),  # Set value to destination keys
-    dict(type='PairedRandomCrop', gt_patch_size=96),  # Paired random crop
-    dict(type='Flip',  # Flip images
-        keys=['img', 'gt'],  # Images to be flipped
-        flip_ratio=0.5,  # Flip ratio
-        direction='horizontal'),  # Flip direction
-    dict(type='Flip',  # Flip images
-        keys=['img', 'gt'],  # Images to be flipped
-        flip_ratio=0.5,  # Flip ratio
-        direction='vertical'),  # Flip direction
-    dict(type='RandomTransposeHW',  # Random transpose h and w for images
-        keys=['img', 'gt'],  # Images to be transposed
-        transpose_ratio=0.5  # Transpose ratio
-        ),
-    dict(type='PackInputs')  # The config of collecting data from current pipeline
-]
-test_pipeline = [  # Test pipeline
-    dict(type='LoadImageFromFile',  # Load images from files
-        key='img',  # Keys in results to find corresponding path
-        color_type='color',  # Color type of image
-        channel_order='rgb',  # Channel order of image
-        imdecode_backend='cv2'),  # decode backend
-    dict(type='LoadImageFromFile',  # Load images from files
-        key='gt',  # Keys in results to find corresponding path
-        color_type='color',  # Color type of image
-        channel_order='rgb',  # Channel order of image
-        imdecode_backend='cv2'),  # decode backend
-    dict(type='PackInputs')  # The config of collecting data from current pipeline
-]
-```
-
-</td>
-
-</tr>
-</thead>
-</table>
-
-## Dataloader
-
-We update dataloader settings in MMagic 1.x. Important modifications are as follows.
-
-- The original `data` field is split into `train_dataloader`, `val_dataloader` and `test_dataloader`. This allows us to configure them in a fine-grained way. For example, you can specify different samplers and batch sizes for training and testing.
-- The `samples_per_gpu` is renamed to `batch_size`.
-- The `workers_per_gpu` is renamed to `num_workers`.
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> Original </th>
-    <th> New </th>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-data = dict(
-    # train
-    samples_per_gpu=16,  # Batch size of a single GPU
-    workers_per_gpu=4,  # Worker to pre-fetch data for each single GPU
-    drop_last=True,  # Use drop_last in data_loader
-    train=dict(  # Train dataset config
-        type='RepeatDataset',  # Repeated dataset for iter-based model
-        times=1000,  # Repeated times for RepeatDataset
-        dataset=dict(
-            type=train_dataset_type,  # Type of dataset
-            lq_folder='data/DIV2K/DIV2K_train_LR_bicubic/X2_sub',  # Path for lq folder
-            gt_folder='data/DIV2K/DIV2K_train_HR_sub',  # Path for gt folder
-            ann_file='data/DIV2K/meta_info_DIV2K800sub_GT.txt',  # Path for annotation file
-            pipeline=train_pipeline,  # See above for train_pipeline
-            scale=scale)),  # Scale factor for upsampling
-    # val
-    val_samples_per_gpu=1,  # Batch size of a single GPU for validation
-    val_workers_per_gpu=4,  # Worker to pre-fetch data for each single GPU for validation
-    val=dict(
-        type=val_dataset_type,  # Type of dataset
-        lq_folder='data/val_set5/Set5_bicLRx2',  # Path for lq folder
-        gt_folder='data/val_set5/Set5_mod12',  # Path for gt folder
-        pipeline=test_pipeline,  # See above for test_pipeline
-        scale=scale,  # Scale factor for upsampling
-        filename_tmpl='{}'),  # filename template
-    # test
-    test=dict(
-        type=val_dataset_type,  # Type of dataset
-        lq_folder='data/val_set5/Set5_bicLRx2',  # Path for lq folder
-        gt_folder='data/val_set5/Set5_mod12',  # Path for gt folder
-        pipeline=test_pipeline,  # See above for test_pipeline
-        scale=scale,  # Scale factor for upsampling
-        filename_tmpl='{}'))  # filename template
-```
-
-</td>
-
-<td valign="top">
-
-```python
-dataset_type = 'BasicImageDataset'  # The type of dataset
-data_root = 'data'  # Root path of data
-train_dataloader = dict(
-    batch_size=16,
-    num_workers=4,  # The number of workers to pre-fetch data for each single GPU
-    persistent_workers=False,  # Whether maintain the workers Dataset instances alive
-    sampler=dict(type='InfiniteSampler', shuffle=True),  # The type of data sampler
-    dataset=dict(  # Train dataset config
-        type=dataset_type,  # Type of dataset
-        ann_file='meta_info_DIV2K800sub_GT.txt',  # Path of annotation file
-        metainfo=dict(dataset_type='div2k', task_name='sisr'),
-        data_root=data_root + '/DIV2K',  # Root path of data
-        data_prefix=dict(  # Prefix of image path
-            img='DIV2K_train_LR_bicubic/X2_sub', gt='DIV2K_train_HR_sub'),
-        filename_tmpl=dict(img='{}', gt='{}'),  # Filename template
-        pipeline=train_pipeline))
-val_dataloader = dict(
-    batch_size=1,
-    num_workers=4,  # The number of workers to pre-fetch data for each single GPU
-    persistent_workers=False,  # Whether maintain the workers Dataset instances alive
-    drop_last=False,  # Whether drop the last incomplete batch
-    sampler=dict(type='DefaultSampler', shuffle=False),  # The type of data sampler
-    dataset=dict(  # Validation dataset config
-        type=dataset_type,  # Type of dataset
-        metainfo=dict(dataset_type='set5', task_name='sisr'),
-        data_root=data_root + '/Set5',  # Root path of data
-        data_prefix=dict(img='LRbicx2', gt='GTmod12'),  # Prefix of image path
-        pipeline=test_pipeline))
-test_dataloader = val_dataloader
-```
-
-</td>
-
-</tr>
-</thead>
-</table>
diff --git a/docs/en/migration/distributed_train.md b/docs/en/migration/distributed_train.md
deleted file mode 100644
index 2bec319093..0000000000
--- a/docs/en/migration/distributed_train.md
+++ /dev/null
@@ -1,80 +0,0 @@
-# Migration of Distributed Training Settings
-
-We have merged [MMGeneration 1.x](https://github.com/open-mmlab/mmgeneration/tree/1.x) into MMagic. Here is the migration of distributed training settings for MMGeneration.
-
-In the 0.x version, MMGeneration uses `DDPWrapper` and `DynamicRunner` to train static and dynamic models (e.g., PGGAN and StyleGANv2) respectively. In the 1.x version, we use `MMSeparateDistributedDataParallel` provided by MMEngine to implement distributed training.
-
-The configuration differences are shown below:
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> Static Model in 0.x Version </th>
-    <th> Static Model in 1.x Version </th>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-# Use DDPWrapper
-use_ddp_wrapper = True
-find_unused_parameters = False
-
-runner = dict(
-    type='DynamicIterBasedRunner',
-    is_dynamic_ddp=False)
-```
-
-</td>
-
-<td valign="top">
-
-```python
-model_wrapper_cfg = dict(
-    type='MMSeparateDistributedDataParallel',
-    broadcast_buffers=False,
-    find_unused_parameters=False)
-```
-
-</td>
-
-</tr>
-</thead>
-</table>
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> Dynamic Model in 0.x Version </th>
-    <th> Dynamic Model in 1.x Version </th>
-<tbody>
-<tr>
-
-<td valign="top">
-
-```python
-use_ddp_wrapper = False
-find_unused_parameters = False
-
-# Use DynamicRunner
-runner = dict(
-    type='DynamicIterBasedRunner',
-    is_dynamic_ddp=True)
-```
-
-</td>
-
-<td valign="top">
-
-```python
-model_wrapper_cfg = dict(
-    type='MMSeparateDistributedDataParallel',
-    broadcast_buffers=False,
-    find_unused_parameters=True) # set `find_unused_parameters` for dynamic models
-```
-
-</td>
-
-</tr>
-</thead>
-</table>
diff --git a/docs/en/migration/eval_test.md b/docs/en/migration/eval_test.md
deleted file mode 100644
index 55cebaa51c..0000000000
--- a/docs/en/migration/eval_test.md
+++ /dev/null
@@ -1,156 +0,0 @@
-# Migration of Evaluation and Testing Settings
-
-We update evaluation settings in MMagic 1.x. Important modifications are as follows.
-
-- The evaluation field is split into `val_evaluator` and `test_evaluator`. The `interval` is moved to `train_cfg.val_interval`.
-- The evaluation metrics are moved from `test_cfg` to `val_evaluator` and `test_evaluator`.
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> Original </th>
-    <th> New </th>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-train_cfg = None  # Training config
-test_cfg = dict(  # Test config
-    metrics=['PSNR'],  # Metrics used during testing
-    crop_border=scale)  # Crop border during evaluation
-
-evaluation = dict(  # The config to build the evaluation hook
-    interval=5000,  # Evaluation interval
-    save_image=True,  # Save images during evaluation
-    gpu_collect=True)  # Use gpu collect
-```
-
-</td>
-
-<td valign="top">
-
-```python
-val_evaluator = [
-    dict(type='PSNR', crop_border=scale),  # The name of metrics to evaluate
-]
-test_evaluator = val_evaluator
-
-train_cfg = dict(
-    type='IterBasedTrainLoop', max_iters=300000, val_interval=5000)  # Config of train loop type
-val_cfg = dict(type='ValLoop')  # The name of validation loop type
-test_cfg = dict(type='TestLoop')  # The name of test loop type
-```
-
-</td>
-
-</tr>
-</thead>
-</table>
-
-We have merged [MMGeneration 1.x](https://github.com/open-mmlab/mmgeneration/tree/1.x) into MMagic. Here is the migration of evaluation and testing settings for MMGeneration.
-
-The evaluation field is split into `val_evaluator` and `test_evaluator`, and it no longer supports the `interval` and `save_best` arguments. The `interval` is moved to `train_cfg.val_interval` (see [the schedule settings](./schedule.md)) and `save_best` is moved to `default_hooks.checkpoint.save_best`.
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> 0.x Version </th>
-    <th> 1.x Version </th>
-  </tr>
-</thead>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-evaluation = dict(
-    type='GenerativeEvalHook',
-    interval=10000,
-    metrics=[
-        dict(
-            type='FID',
-            num_images=50000,
-            bgr2rgb=True,
-            inception_args=dict(type='StyleGAN')),
-        dict(type='IS', num_images=50000)
-    ],
-    best_metric=['fid', 'is'],
-    sample_kwargs=dict(sample_model='ema'))
-```
-
-</td>
-
-<td valign="top">
-
-```python
-val_evaluator = dict(
-    type='Evaluator',
-    metrics=[
-        dict(
-            type='FID',
-            prefix='FID-Full-50k',
-            fake_nums=50000,
-            inception_style='StyleGAN',
-            sample_model='orig')
-        dict(
-            type='IS',
-            prefix='IS-50k',
-            fake_nums=50000)])
-# set best config
-default_hooks = dict(
-    checkpoint=dict(
-        type='CheckpointHook',
-        interval=10000,
-        by_epoch=False,
-        less_keys=['FID-Full-50k/fid'],
-        greater_keys=['IS-50k/is'],
-        save_optimizer=True,
-        save_best=['FID-Full-50k/fid', 'IS-50k/is'],
-        rule=['less', 'greater']))
-test_evaluator = val_evaluator
-```
-
-</td>
-
-</tr>
-</tbody>
-</table>
-
-To evaluate and test the model correctly, we need to set the specific loop types in `val_cfg` and `test_cfg`.
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> Static Model in 0.x Version </th>
-    <th> Static Model in 1.x Version </th>
-  </tr>
-</thead>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-total_iters = 1000000
-
-runner = dict(
-    type='DynamicIterBasedRunner',
-    is_dynamic_ddp=False,
-    pass_training_status=True)
-```
-
-</td>
-
-<td valign="top">
-
-```python
-train_cfg = dict(
-    by_epoch=False,  # use iteration based training
-    max_iters=1000000,  # max training iteration
-    val_begin=1,
-    val_interval=10000)  # evaluation interval
-val_cfg = dict(type='MultiValLoop')  # specific loop in validation
-test_cfg = dict(type='MultiTestLoop')  # specific loop in testing
-```
-
-</td>
-
-</tr>
-</tbody>
-</table>
diff --git a/docs/en/migration/models.md b/docs/en/migration/models.md
deleted file mode 100644
index 801a58e1ae..0000000000
--- a/docs/en/migration/models.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# Migration of Model Settings
-
-We update model settings in MMagic 1.x. The important modifications are as follows.
-
-- Remove `pretrained` fields.
-- Add `train_cfg` and `test_cfg` fields in model settings.
-- Add `data_preprocessor` fields. Normalization and color space transform operations are moved from the dataset transform pipelines to `data_preprocessor`. We will introduce `data_preprocessor` later.
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> Original </th>
-    <th> New </th>
-  </tr>
-</thead>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-model = dict(
-    type='BasicRestorer',  # Name of the model
-    generator=dict(  # Config of the generator
-        type='EDSR',  # Type of the generator
-        in_channels=3,  # Channel number of inputs
-        out_channels=3,  # Channel number of outputs
-        mid_channels=64,  # Channel number of intermediate features
-        num_blocks=16,  # Block number in the trunk network
-        upscale_factor=scale, # Upsampling factor
-        res_scale=1,  # Used to scale the residual in residual block
-        rgb_mean=(0.4488, 0.4371, 0.4040),  # Image mean in RGB orders
-        rgb_std=(1.0, 1.0, 1.0)),  # Image std in RGB orders
-    pretrained=None,
-    pixel_loss=dict(type='L1Loss', loss_weight=1.0, reduction='mean'))  # Config for pixel loss
-```
-
-</td>
-
-<td valign="top">
-
-```python
-model = dict(
-    type='BaseEditModel',  # Name of the model
-    generator=dict(  # Config of the generator
-        type='EDSRNet',  # Type of the generator
-        in_channels=3,  # Channel number of inputs
-        out_channels=3,  # Channel number of outputs
-        mid_channels=64,  # Channel number of intermediate features
-        num_blocks=16,  # Block number in the trunk network
-        upscale_factor=scale, # Upsampling factor
-        res_scale=1,  # Used to scale the residual in residual block
-        rgb_mean=(0.4488, 0.4371, 0.4040),  # Image mean in RGB orders
-        rgb_std=(1.0, 1.0, 1.0)),  # Image std in RGB orders
-    pixel_loss=dict(type='L1Loss', loss_weight=1.0, reduction='mean'),  # Config for pixel loss
-    train_cfg=dict(),  # Config of training model.
-    test_cfg=dict(),  # Config of testing model.
-    data_preprocessor=dict(  # The config to build the data preprocessor
-        type='DataPreprocessor', mean=[0., 0., 0.], std=[255., 255., 255.]))
-```
-
-</td>
-
-</tr>
-</tbody>
-</table>
-
-We refactor models in MMagic 1.x. The important modifications are as follows.
-
-- The `models` in MMagic 1.x are refactored into six parts: `archs`, `base_models`, `data_preprocessors`, `editors`, `diffusion_schedulers` and `losses`.
-- Add the `data_preprocessor` module in `models`. Normalization and color space transform operations are moved from the dataset transform pipelines to `data_preprocessor`. The data coming out of the data pipeline is transformed by this module and then fed into the model.
-
-More details of models are shown in [model guides](../howto/models.md).
diff --git a/docs/en/migration/optimizers.md b/docs/en/migration/optimizers.md
deleted file mode 100644
index 7d364388bf..0000000000
--- a/docs/en/migration/optimizers.md
+++ /dev/null
@@ -1,156 +0,0 @@
-# Migration of Optimizers
-
-We have merged [MMGeneration 1.x](https://github.com/open-mmlab/mmgeneration/tree/1.x) into MMagic. This section covers the migration of optimizers from MMGeneration.
-
-In version 0.x, MMGeneration uses PyTorch's native Optimizer, which only provides general parameter optimization.
-In version 1.x, we use `OptimWrapper` provided by MMEngine.
-
-Compared to PyTorch's `Optimizer`, `OptimWrapper` supports the following features:
-
-- `OptimWrapper.update_params` implements `zero_grad`, `backward` and `step` in a single function.
-- It supports gradient accumulation automatically.
-- It provides a context manager named `OptimWrapper.optim_context` to wrap the forward process. `optim_context` can automatically call `torch.no_sync` according to the current iteration count. In AMP (automatic mixed precision) training, `autocast` is called in `optim_context` as well.
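-
-The snippet below is a minimal sketch of this API (the toy model and hyper-parameters are illustrative, not taken from any MMagic config):
-
-```python
-import torch
-import torch.nn as nn
-from mmengine.optim import OptimWrapper
-
-model = nn.Linear(4, 2)  # toy model standing in for a generator
-optim_wrapper = OptimWrapper(
-    optimizer=torch.optim.Adam(model.parameters(), lr=1e-4))
-
-inputs = torch.randn(8, 4)
-# optim_context wraps the forward pass; with DDP and gradient
-# accumulation it skips gradient synchronization via torch.no_sync
-with optim_wrapper.optim_context(model):
-    loss = model(inputs).mean()
-optim_wrapper.update_params(loss)  # zero_grad + backward + step in one call
-```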
-
-For GAN models, the generator and discriminator use different optimizers and training schedules.
-To keep the signature of the GAN model's `train_step` consistent with other models, we use `OptimWrapperDict`, inherited from `OptimWrapper`, to wrap the optimizers of the generator and discriminator.
-To automate this process, MMagic implements `MultiOptimWrapperConstructor`,
-and you should specify this constructor in your config if you want to train a GAN model.
-
-The configs for the 0.x and 1.x versions are shown below:
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> 0.x Version </th>
-    <th> 1.x Version </th>
-  </tr>
-</thead>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-optimizer = dict(
-    generator=dict(type='Adam', lr=0.0001, betas=(0.0, 0.999), eps=1e-6),
-    discriminator=dict(type='Adam', lr=0.0004, betas=(0.0, 0.999), eps=1e-6))
-```
-
-</td>
-
-<td valign="top">
-
-```python
-optim_wrapper = dict(
-    constructor='MultiOptimWrapperConstructor',
-    generator=dict(optimizer=dict(type='Adam', lr=0.0002, betas=(0.0, 0.999), eps=1e-6)),
-    discriminator=dict(
-        optimizer=dict(type='Adam', lr=0.0004, betas=(0.0, 0.999), eps=1e-6)))
-```
-
-</td>
-
-</tr>
-</tbody>
-</table>
-
-> Note that, in 1.x, MMGeneration uses `OptimWrapper` to realize gradient accumulation. This makes the configuration of `discriminator_steps` (a training trick that updates the generator once after multiple updates of the discriminator) and gradient accumulation differ between the 0.x and 1.x versions.
-
-- In the 0.x version, we use `disc_steps`, `gen_steps` and `batch_accumulation_steps` in configs. `disc_steps` and `batch_accumulation_steps` are counted by the number of calls of `train_step` (which is also the number of data reads from the dataloader). Therefore, the number of consecutive updates of the discriminator is `disc_steps // batch_accumulation_steps`. For generators, `gen_steps` is the number of times the generator actually updates consecutively.
-- In the 1.x version, we use `discriminator_steps`, `generator_steps` and `accumulative_counts` in configs. `discriminator_steps` and `generator_steps` are the numbers of consecutive updates of a module before updating the other one.
-
-Take the config of BigGAN-128 as an example.
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> 0.x Version </th>
-    <th> 1.x Version </th>
-  </tr>
-</thead>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-model = dict(
-    type='BasiccGAN',
-    generator=dict(
-        type='BigGANGenerator',
-        output_scale=128,
-        noise_size=120,
-        num_classes=1000,
-        base_channels=96,
-        shared_dim=128,
-        with_shared_embedding=True,
-        sn_eps=1e-6,
-        init_type='ortho',
-        act_cfg=dict(type='ReLU', inplace=True),
-        split_noise=True,
-        auto_sync_bn=False),
-    discriminator=dict(
-        type='BigGANDiscriminator',
-        input_scale=128,
-        num_classes=1000,
-        base_channels=96,
-        sn_eps=1e-6,
-        init_type='ortho',
-        act_cfg=dict(type='ReLU', inplace=True),
-        with_spectral_norm=True),
-    gan_loss=dict(type='GANLoss', gan_type='hinge'))
-
-# continuous update discriminator for `disc_steps // batch_accumulation_steps = 8 // 8 = 1` times
-# continuous update generator for `gen_steps = 1` times
-# generators and discriminators perform `batch_accumulation_steps = 8` times gradient accumulations before each update
-train_cfg = dict(
-    disc_steps=8, gen_steps=1, batch_accumulation_steps=8, use_ema=True)
-```
-
-</td>
-
-<td valign="top">
-
-```python
-model = dict(
-    type='BigGAN',
-    num_classes=1000,
-    data_preprocessor=dict(type='DataPreprocessor'),
-    generator=dict(
-        type='BigGANGenerator',
-        output_scale=128,
-        noise_size=120,
-        num_classes=1000,
-        base_channels=96,
-        shared_dim=128,
-        with_shared_embedding=True,
-        sn_eps=1e-6,
-        init_type='ortho',
-        act_cfg=dict(type='ReLU', inplace=True),
-        split_noise=True,
-        auto_sync_bn=False),
-    discriminator=dict(
-        type='BigGANDiscriminator',
-        input_scale=128,
-        num_classes=1000,
-        base_channels=96,
-        sn_eps=1e-6,
-        init_type='ortho',
-        act_cfg=dict(type='ReLU', inplace=True),
-        with_spectral_norm=True),
-    # continuous update discriminator for `discriminator_steps = 1` times
-    # continuous update generator for `generator_steps = 1` times
-    generator_steps=1,
-    discriminator_steps=1)
-
-optim_wrapper = dict(
-    constructor='MultiOptimWrapperConstructor',
-    generator=dict(
-        # generator perform `accumulative_counts = 8` times gradient accumulations before each update
-        accumulative_counts=8,
-        optimizer=dict(type='Adam', lr=0.0001, betas=(0.0, 0.999), eps=1e-6)),
-    discriminator=dict(
-        # discriminator perform `accumulative_counts = 8` times gradient accumulations before each update
-        accumulative_counts=8,
-        optimizer=dict(type='Adam', lr=0.0004, betas=(0.0, 0.999), eps=1e-6)))
-```
-
-</td>
-
-</tr>
-</tbody>
-</table>
diff --git a/docs/en/migration/overview.md b/docs/en/migration/overview.md
deleted file mode 100644
index 3ad56d7e6c..0000000000
--- a/docs/en/migration/overview.md
+++ /dev/null
@@ -1,26 +0,0 @@
-# Overview
-
-This section introduces the following contents in terms of migration from MMEditing 0.x:
-
-- [Overview](#overview)
-  - [New dependencies](#new-dependencies)
-  - [Overall structures](#overall-structures)
-  - [Other config settings](#other-config-settings)
-
-## New dependencies
-
-MMagic 1.x depends on some new packages. You can prepare a new, clean environment and install MMagic again according to the [install tutorial](../get_started/install.md).
-
-## Overall structures
-
-We refactor overall structures in MMagic 1.x as follows.
-
-- The `core` in the old versions of MMEdit is split into `engine`, `evaluation`, `structures`, and `visualization`.
-- The `pipelines` of `datasets` in the old versions of MMEdit are refactored into `transforms`.
-- The `models` in MMagic 1.x are refactored into six parts: `archs`, `base_models`, `data_preprocessors`, `editors`, `diffusion_schedulers`, and `losses`.
-
-## Other config settings
-
-We rename config files following the new template: `{model_settings}_{module_setting}_{training_setting}_{datasets_info}`.
-
-More details of config are shown in [config guides](../user_guides/config.md).
diff --git a/docs/en/migration/runtime.md b/docs/en/migration/runtime.md
deleted file mode 100644
index d9b708e07c..0000000000
--- a/docs/en/migration/runtime.md
+++ /dev/null
@@ -1,79 +0,0 @@
-# Migration of Runtime Settings
-
-We update runtime settings in MMagic 1.x. The important modifications are as follows.
-
-- The `checkpoint_config` is moved to `default_hooks.checkpoint` and the `log_config` is moved to `default_hooks.logger`. We also move many hook settings from the script code to the `default_hooks` field in the runtime configuration.
-- The `resume_from` field is removed, and we use `resume` to replace it (see the sketch after this list).
-  - If `resume=True` and `load_from` is not None, resume training from the checkpoint in `load_from`.
-  - If `resume=True` and `load_from` is None, try to resume from the latest checkpoint in the work directory.
-  - If `resume=False` and `load_from` is not None, only load the checkpoint, not resume training.
-  - If `resume=False` and `load_from` is None, do not load nor resume.
-- The `dist_params` field is now a sub-field of `env_cfg`, and there are some new configurations in `env_cfg`.
-- The `workflow` related functionalities are removed.
-- New field `visualizer`: The visualizer is a new design. We use a visualizer instance in the runner to handle results and log visualization and save them to different backends, like Local, TensorBoard and Wandb.
-- New field `default_scope`: The starting point to search modules for all registries.
-
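-For example, a minimal sketch of the most common case, resuming from the latest checkpoint in the work directory:
-
-```python
-load_from = None  # do not load a specific checkpoint
-resume = True  # resume from the latest checkpoint in the work directory
-```
-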
-<table class="docutils">
-<thead>
-  <tr>
-    <th> Original </th>
-    <th> New </th>
-  </tr>
-</thead>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-checkpoint_config = dict(  # Config to set the checkpoint hook. Refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/checkpoint.py for implementation.
-    interval=5000,  # The save interval is 5000 iterations
-    save_optimizer=True,  # Also save optimizers
-    by_epoch=False)  # Count by iterations
-log_config = dict(  # Config to register logger hook
-    interval=100,  # Interval to print the log
-    hooks=[
-        dict(type='TextLoggerHook', by_epoch=False),  # The logger used to record the training process
-        dict(type='TensorboardLoggerHook'),  # The Tensorboard logger is also supported
-    ])
-visual_config = None  # Visual config, we do not use it.
-# runtime settings
-dist_params = dict(backend='nccl')  # Parameters to setup distributed training, the port can also be set
-log_level = 'INFO'  # The level of logging
-load_from = None # load models as a pre-trained model from a given path. This will not resume training
-resume_from = None # Resume checkpoints from a given path, the training will be resumed from the iteration when the checkpoint was saved
-workflow = [('train', 1)]  # Workflow for runner. [('train', 1)] means there is only one workflow and the workflow named 'train' is executed once. Keep this unchanged when training current matting models
-```
-
-</td>
-
-<td valign="top">
-
-```python
-default_hooks = dict(  # Used to build default hooks
-    checkpoint=dict(  # Config to set the checkpoint hook
-        type='CheckpointHook',
-        interval=5000,  # The save interval is 5000 iterations
-        save_optimizer=True,
-        by_epoch=False,  # Count by iterations
-        out_dir=save_dir,
-    ),
-    timer=dict(type='IterTimerHook'),
-    logger=dict(type='LoggerHook', interval=100),  # Config to register logger hook
-    param_scheduler=dict(type='ParamSchedulerHook'),
-    sampler_seed=dict(type='DistSamplerSeedHook'),
-)
-default_scope = 'mmedit' # Used to set registries location
-env_cfg = dict(  # Parameters to setup distributed training, the port can also be set
-    cudnn_benchmark=False,
-    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=4),
-    dist_cfg=dict(backend='nccl'),
-)
-log_level = 'INFO'  # The level of logging
-log_processor = dict(type='LogProcessor', window_size=100, by_epoch=False)  # Used to build log processor
-load_from = None  # load models as a pre-trained model from a given path. This will not resume training.
-resume = False  # Resume checkpoints from a given path, the training will be resumed from the epoch when the checkpoint was saved.
-```
-
-</td>
-
-</tr>
-</tbody>
-</table>
diff --git a/docs/en/migration/schedule.md b/docs/en/migration/schedule.md
deleted file mode 100644
index 0d90bbe9dc..0000000000
--- a/docs/en/migration/schedule.md
+++ /dev/null
@@ -1,50 +0,0 @@
-# Migration of Schedule Settings
-
-We update schedule settings in MMagic 1.x. The important modifications are as follows.
-
-- Now we use the `optim_wrapper` field to specify all configurations of the optimization process, and `optimizer` is a sub-field of `optim_wrapper`.
-- The `lr_config` field is removed and we use the new `param_scheduler` to replace it.
-- The `total_iters` field is moved to `train_cfg` as `max_iters`, and the new `val_cfg` and `test_cfg` fields configure the loops in validation and testing.
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> Original </th>
-    <th> New </th>
-  </tr>
-</thead>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-optimizers = dict(generator=dict(type='Adam', lr=1e-4, betas=(0.9, 0.999)))  # Config used to build optimizer, support all the optimizers in PyTorch whose arguments are also the same as those in PyTorch
-total_iters = 300000 # Total training iters
-lr_config = dict( # Learning rate scheduler config used to register LrUpdater hook
-    policy='Step', by_epoch=False, step=[200000], gamma=0.5)  # The policy of scheduler
-```
-
-</td>
-
-<td valign="top">
-
-```python
-optim_wrapper = dict(
-    type='OptimWrapper',
-    optimizer=dict(type='Adam', lr=1e-4),
-)  # Config used to build optimizer, support all the optimizers in PyTorch whose arguments are also the same as those in PyTorch.
-param_scheduler = dict(  # Config of learning policy
-    type='MultiStepLR', by_epoch=False, milestones=[200000], gamma=0.5)  # The policy of scheduler
-train_cfg = dict(
-    type='IterBasedTrainLoop', max_iters=300000, val_interval=5000)  # Config of train loop type
-val_cfg = dict(type='ValLoop')  # The name of validation loop type
-test_cfg = dict(type='TestLoop')  # The name of test loop type
-```
-
-</td>
-
-</tr>
-</tbody>
-</table>
-
-> More details of schedule settings are shown in [MMEngine Documents](https://github.com/open-mmlab/mmengine/blob/main/docs/en/migration/param_scheduler.md).
diff --git a/docs/en/migration/visualization.md b/docs/en/migration/visualization.md
deleted file mode 100644
index 325dabbab4..0000000000
--- a/docs/en/migration/visualization.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# Migration of Visualization
-
-In 0.x, MMEditing uses `VisualizationHook` to visualize results during training. In the 1.x version, we unify the function of those hooks into `BasicVisualizationHook` / `VisualizationHook`. Additionally, following the design of MMEngine, we implement `ConcatImageVisualizer` / `Visualizer` and a group of `VisBackend` classes to draw and save the visualization results.
-
-<table class="docutils">
-<thead>
-  <tr>
-    <th> 0.x version </th>
-    <th> 1.x Version </th>
-  </tr>
-</thead>
-<tbody>
-<tr>
-<td valign="top">
-
-```python
-visual_config = dict(
-    type='VisualizationHook',
-    output_dir='visual',
-    interval=1000,
-    res_name_list=['gt_img', 'masked_img', 'fake_res', 'fake_img'],
-)
-```
-
-</td>
-
-<td valign="top">
-
-```python
-vis_backends = [dict(type='LocalVisBackend')]
-visualizer = dict(
-    type='ConcatImageVisualizer',
-    vis_backends=vis_backends,
-    fn_key='gt_path',
-    img_keys=['gt_img', 'input', 'pred_img'],
-    bgr2rgb=True)
-custom_hooks = [dict(type='BasicVisualizationHook', interval=1)]
-```
-
-</td>
-
-</tr>
-</tbody>
-</table>
-
-To learn more about the visualization function, please refer to [this tutorial](../user_guides/visualization.md).
diff --git a/docs/en/switch_language.md b/docs/en/switch_language.md
deleted file mode 100644
index 1396b11ae8..0000000000
--- a/docs/en/switch_language.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# <a href='https://mmagic.readthedocs.io/en/latest/'>English</a>
-
-# <a href='https://mmagic.readthedocs.io/zh_CN/latest/'>简体中文</a>
diff --git a/docs/en/user_guides/config.md b/docs/en/user_guides/config.md
deleted file mode 100644
index c33c47b6a4..0000000000
--- a/docs/en/user_guides/config.md
+++ /dev/null
@@ -1,1014 +0,0 @@
-# Tutorial 1: Learn about Configs in MMagic
-
-We incorporate modular and inheritance design into our config system, which is convenient to conduct various experiments.
-If you wish to inspect the config file, you may run `python tools/misc/print_config.py /PATH/TO/CONFIG` to see the complete config.
-
-You can learn about the usage of our config system according to the following tutorials.
-
-- [Tutorial 1: Learn about Configs in MMagic](#tutorial-1-learn-about-configs-in-mmagic)
-  - [Modify config through script arguments](#modify-config-through-script-arguments)
-  - [Config file structure](#config-file-structure)
-  - [Config name style](#config-name-style)
-  - [An example of EDSR](#an-example-of-edsr)
-    - [Model config](#model-config)
-    - [Data config](#data-config)
-      - [Data pipeline](#data-pipeline)
-      - [Dataloader](#dataloader)
-    - [Evaluation config](#evaluation-config)
-    - [Training and testing config](#training-and-testing-config)
-    - [Optimization config](#optimization-config)
-    - [Hook config](#hook-config)
-    - [Runtime config](#runtime-config)
-  - [An example of StyleGAN2](#an-example-of-stylegan2)
-    - [Model config](#model-config-1)
-    - [Dataset and evaluator config](#dataset-and-evaluator-config)
-    - [Training and testing config](#training-and-testing-config-1)
-    - [Optimization config](#optimization-config-1)
-    - [Hook config](#hook-config-1)
-    - [Runtime config](#runtime-config-1)
-  - [Other examples](#other-examples)
-    - [An example of config system for inpainting](#an-example-of-config-system-for-inpainting)
-    - [An example of config system for matting](#an-example-of-config-system-for-matting)
-    - [An example of config system for restoration](#an-example-of-config-system-for-restoration)
-
-## Modify config through script arguments
-
-When submitting jobs using `tools/train.py` or `tools/test.py`, you may specify `--cfg-options` to modify the config in place.
-
-- Update config keys of dict chains.
-
-  The config options can be specified following the order of the dict keys in the original config.
-  For example, `--cfg-options test_cfg.use_ema=False` changes the default sampling model to the original generator,
-  and `--cfg-options train_dataloader.batch_size=8` changes the batch size of the train dataloader.
-
-- Update keys inside a list of configs.
-
-  Some config dicts are composed as a list in your config.
-  For example, the training pipeline `train_dataloader.dataset.pipeline` is normally a list,
-  e.g. `[dict(type='LoadImageFromFile'), ...]`. If you want to change `'LoadImageFromFile'` to `'LoadImageFromWebcam'` in the pipeline,
-  you may specify `--cfg-options train_dataloader.dataset.pipeline.0.type=LoadImageFromWebcam`.
-  Similarly, if the top-level `train_pipeline` is a list, e.g. `[dict(type='LoadImageFromFile'), ...]`,
-  and you want to change `'LoadImageFromFile'` to `'LoadMask'` in the pipeline,
-  you may specify `--cfg-options train_pipeline.0.type=LoadMask`.
-
-- Update values of list/tuples.
-
-  If the value to be updated is a list or a tuple, you can set `--cfg-options key="[a,b]"` or `--cfg-options key=a,b`. It also allows nested list/tuple values, e.g., `--cfg-options key="[(a,b),(c,d)]"`. Note that the quotation mark " is necessary to support list/tuple data types, and that **NO** white space is allowed inside the quotation marks in the specified value.
-
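-The same overrides can also be applied programmatically. Below is a minimal sketch, assuming MMEngine's `Config` API and the EDSR config discussed later in this tutorial:
-
-```python
-from mmengine.config import Config
-
-cfg = Config.fromfile('configs/edsr/edsr_x2c64b16_g1_300k_div2k.py')
-# equivalent to `--cfg-options train_dataloader.batch_size=8`
-cfg.merge_from_dict({'train_dataloader.batch_size': 8})
-print(cfg.train_dataloader.batch_size)  # 8
-```
-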
-## Config file structure
-
-There are 3 basic component types under `config/_base_`: datasets, models and default_runtime.
-Many methods can be easily constructed with one of each, such as AOT-GAN, EDVR, GLEAN, StyleGAN2, CycleGAN, SinGAN, etc.
-Configs consisting of components from `_base_` are called _primitive_.
-
-For all configs under the same folder, it is recommended to have only **one** _primitive_ config. All other configs should inherit from the _primitive_ config. In this way, the maximum inheritance level is 3.
-
-For ease of understanding, we recommend contributors inherit from existing methods.
-For example, if some modification is made based on BasicVSR,
-users may first inherit the basic BasicVSR structure by specifying `_base_ = ../basicvsr/basicvsr_reds4.py`,
-then modify the necessary fields in the config file.
-If some modification is made based on StyleGAN2,
-users may first inherit the basic StyleGAN2 structure by specifying `_base_ = ../styleganv2/stylegan2_c2_ffhq_256_b4x8_800k.py`,
-then modify the necessary fields in the config file.
-
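-A minimal sketch of such an inherited config (the overridden values are illustrative):
-
-```python
-_base_ = ['../basicvsr/basicvsr_reds4.py']  # inherit the primitive BasicVSR config
-
-# override only the fields that differ from the base config
-train_dataloader = dict(batch_size=2)
-train_cfg = dict(max_iters=600000)
-```
-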
-If you are building an entirely new method that does not share the structure with any of the existing methods,
-you may create a folder `xxx` under `configs`.
-
-Please refer to [MMEngine](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/config.md) for detailed documentation.
-
-## Config name style
-
-```
-{model}_[module setting]_{training schedule}_{dataset}
-```
-
-`{xxx}` is a required field and `[yyy]` is optional.
-
-- `{model}`: model type like `stylegan`, `dcgan`, `basicvsr`, `dim`, etc.
-  Settings referred to in the original paper are included in this field as well (e.g., `Stylegan2-config-f`, `edvrm` of `edvrm_8xb4-600k_reds`).
-- `[module setting]`: specific settings for some modules, including Encoder, Decoder, Generator, Discriminator, Normalization, loss, Activation, etc. E.g. `c64n7` of `basicvsr-pp_c64n7_8xb1-600k_reds4`, learning rate `Glr4e-4_Dlr1e-4` for dcgan, `gamma32.8` for stylegan3, `woReLUInplace` in sagan. In this section, information from different submodules (e.g., generator and discriminator) is connected with `_`.
-- `{training schedule}`: specific settings for training, including batch size, schedule, etc. For example: learning rate (e.g., `lr1e-3`), number of GPUs and batch size used (e.g., `8xb32`), and total iterations (e.g., `160kiter`) or number of images shown to the discriminator (e.g., `12Mimgs`).
-- `{dataset}`: dataset name and data size info like `celeba-256x256` of `deepfillv1_4xb4_celeba-256x256`, `reds4` of `basicvsr_2xb4_reds4`, `ffhq`, `lsun-car`, `celeba-hq`.
-
-## An example of EDSR
-
-To help the users have a basic idea of a complete config,
-we make brief comments on the [config of the EDSR model](https://github.com/open-mmlab/mmagic/blob/main/configs/edsr/edsr_x2c64b16_g1_300k_div2k.py) we implemented, as follows.
-For more detailed usage and the corresponding alternatives for each module,
-please refer to the API documentation and the [tutorial in MMEngine](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/config.md).
-
-### Model config
-
-In MMagic's config, we use model fields to set up a model.
-
-```python
-model = dict(
-    type='BaseEditModel',  # Name of the model
-    generator=dict(  # Config of the generator
-        type='EDSRNet',  # Type of the generator
-        in_channels=3,  # Channel number of inputs
-        out_channels=3,  # Channel number of outputs
-        mid_channels=64,  # Channel number of intermediate features
-        num_blocks=16,  # Block number in the trunk network
-        upscale_factor=scale, # Upsampling factor
-        res_scale=1,  # Used to scale the residual in residual block
-        rgb_mean=(0.4488, 0.4371, 0.4040),  # Image mean in RGB orders
-        rgb_std=(1.0, 1.0, 1.0)),  # Image std in RGB orders
-    pixel_loss=dict(type='L1Loss', loss_weight=1.0, reduction='mean'),  # Config for pixel loss
-    train_cfg=dict(),  # Config of training model.
-    test_cfg=dict(),  # Config of testing model.
-    data_preprocessor=dict(  # The config to build the data preprocessor
-        type='DataPreprocessor', mean=[0., 0., 0.], std=[255., 255., 255.]))
-```
-
-### Data config
-
-[Dataloaders](https://pytorch.org/docs/stable/data.html?highlight=data%20loader#torch.utils.data.DataLoader) are required for the training, validation, and testing of the [runner](https://mmengine.readthedocs.io/en/latest/tutorials/runner.html).
-Dataset and data pipeline need to be set to build the dataloader. Due to the complexity of this part, we use intermediate variables to simplify the writing of dataloader configs.
-
-#### Data pipeline
-
-```python
-train_pipeline = [  # Training data processing pipeline
-    dict(type='LoadImageFromFile',  # Load images from files
-        key='img',  # Keys in results to find the corresponding path
-        color_type='color',  # Color type of image
-        channel_order='rgb',  # Channel order of image
-        imdecode_backend='cv2'),  # decode backend
-    dict(type='LoadImageFromFile',  # Load images from files
-        key='gt',  # Keys in results to find the corresponding path
-        color_type='color',  # Color type of image
-        channel_order='rgb',  # Channel order of image
-        imdecode_backend='cv2'),  # decode backend
-    dict(type='SetValues', dictionary=dict(scale=scale)),  # Set value to destination keys
-    dict(type='PairedRandomCrop', gt_patch_size=96),  # Paired random crop
-    dict(type='Flip',  # Flip images
-        keys=['lq', 'gt'],  # Images to be flipped
-        flip_ratio=0.5,  # Flip ratio
-        direction='horizontal'),  # Flip direction
-    dict(type='Flip',  # Flip images
-        keys=['lq', 'gt'],  # Images to be flipped
-        flip_ratio=0.5,  # Flip ratio
-        direction='vertical'),  # Flip direction
-    dict(type='RandomTransposeHW',  # Random transpose h and w for images
-        keys=['lq', 'gt'],  # Images to be transposed
-        transpose_ratio=0.5  # Transpose ratio
-        ),
-    dict(type='PackInputs')  # The config of collecting data from the current pipeline
-]
-test_pipeline = [  # Test pipeline
-    dict(type='LoadImageFromFile',  # Load images from files
-        key='img',  # Keys in results to find corresponding path
-        color_type='color',  # Color type of image
-        channel_order='rgb',  # Channel order of image
-        imdecode_backend='cv2'),  # decode backend
-    dict(type='LoadImageFromFile',  # Load images from files
-        key='gt',  # Keys in results to find corresponding path
-        color_type='color',  # Color type of image
-        channel_order='rgb',  # Channel order of image
-        imdecode_backend='cv2'),  # decode backend
-    dict(type='PackInputs')  # The config of collecting data from the current pipeline
-]
-```
-
-#### Dataloader
-
-```python
-dataset_type = 'BasicImageDataset'  # The type of dataset
-data_root = 'data'  # Root path of data
-train_dataloader = dict(
-    num_workers=4,  # The number of workers to pre-fetch data for each single GPU
-    persistent_workers=False,  # Whether to keep the worker Dataset instances alive
-    sampler=dict(type='InfiniteSampler', shuffle=True),  # The type of data sampler
-    dataset=dict(  # Train dataset config
-        type=dataset_type,  # Type of dataset
-        ann_file='meta_info_DIV2K800sub_GT.txt',  # Path of annotation file
-        metainfo=dict(dataset_type='div2k', task_name='sisr'),
-        data_root=data_root + '/DIV2K',  # Root path of data
-        data_prefix=dict(  # Prefix of image path
-            img='DIV2K_train_LR_bicubic/X2_sub', gt='DIV2K_train_HR_sub'),
-        filename_tmpl=dict(img='{}', gt='{}'),  # Filename template
-        pipeline=train_pipeline))
-val_dataloader = dict(
-    num_workers=4,  # The number of workers to pre-fetch data for each single GPU
-    persistent_workers=False,  # Whether to keep the worker Dataset instances alive
-    drop_last=False,  # Whether drop the last incomplete batch
-    sampler=dict(type='DefaultSampler', shuffle=False),  # The type of data sampler
-    dataset=dict(  # Validation dataset config
-        type=dataset_type,  # Type of dataset
-        metainfo=dict(dataset_type='set5', task_name='sisr'),
-        data_root=data_root + '/Set5',  # Root path of data
-        data_prefix=dict(img='LRbicx2', gt='GTmod12'),  # Prefix of image path
-        pipeline=test_pipeline))
-test_dataloader = val_dataloader
-```
-
-### Evaluation config
-
-[Evaluators](https://mmengine.readthedocs.io/en/latest/tutorials/evaluation.html) are used to compute the metrics of the trained model on the validation and testing datasets.
-The config of evaluators consists of one or a list of metric configs:
-
-```python
-val_evaluator = [
-    dict(type='MAE'),  # The name of metrics to evaluate
-    dict(type='PSNR', crop_border=scale),  # The name of metrics to evaluate
-    dict(type='SSIM', crop_border=scale),  # The name of metrics to evaluate
-]
-test_evaluator = val_evaluator # The config for testing evaluator
-```
-
-### Training and testing config
-
-MMEngine's runner uses Loop to control the training, validation, and testing processes.
-Users can set the maximum training iteration and validation intervals with these fields.
-
-```python
-train_cfg = dict(
-    type='IterBasedTrainLoop',  # The name of train loop type
-    max_iters=300000,  # The number of total iterations
-    val_interval=5000,  # The number of validation interval iterations
-)
-val_cfg = dict(type='ValLoop')  # The name of validation loop type
-test_cfg = dict(type='TestLoop')  # The name of test loop type
-```
-
-### Optimization config
-
-`optim_wrapper` is the field to configure optimization related settings.
-The optimizer wrapper not only provides the functions of the optimizer, but also supports functions such as gradient clipping, mixed precision training, etc. Find more in [optimizer wrapper tutorial](https://mmengine.readthedocs.io/en/latest/tutorials/optim_wrapper.html).
-
-```python
-optim_wrapper = dict(
-    type='OptimWrapper',
-    optimizer=dict(type='Adam', lr=0.00001),
-)  # Config used to build optimizer, support all the optimizers in PyTorch whose arguments are also the same as those in PyTorch.
-```
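-
-Other features of the optimizer wrapper are enabled through extra fields. For example, a minimal sketch of gradient clipping (the `max_norm` value is illustrative):
-
-```python
-optim_wrapper = dict(
-    type='OptimWrapper',
-    optimizer=dict(type='Adam', lr=0.00001),
-    clip_grad=dict(max_norm=1.0))  # clip gradients by their global norm before each step
-```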
-
-`param_scheduler` is a field that configures methods of adjusting optimization hyper-parameters such as learning rate and momentum.
-Users can combine multiple schedulers to create a desired parameter adjustment strategy.
-Find more in [parameter scheduler tutorial](https://mmengine.readthedocs.io/en/latest/tutorials/param_scheduler.html).
-
-```python
-param_scheduler = dict(  # Config of learning policy
-    type='MultiStepLR', by_epoch=False, milestones=[200000], gamma=0.5)
-```
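-
-Multiple schedulers are combined by passing a list. A minimal sketch, assuming MMEngine's `LinearLR` for warm-up (the iteration numbers are illustrative):
-
-```python
-param_scheduler = [
-    # linearly warm up the learning rate over the first 1000 iterations
-    dict(type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=1000),
-    # then decay the learning rate by 0.5 at iteration 200000
-    dict(type='MultiStepLR', by_epoch=False, milestones=[200000], gamma=0.5)
-]
-```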
-
-### Hook config
-
-Users can attach hooks to training, validation, and testing loops to insert some operations during running. There are two different hook fields, one is `default_hooks` and the other is `custom_hooks`.
-
-`default_hooks` is a dict of hook configs for the hooks required at runtime. They have default priorities which should not be modified. If not set, the runner will use the default values. To disable a default hook, users can set its config to `None`.
-
-```python
-default_hooks = dict(  # Used to build default hooks
-    checkpoint=dict(  # Config to set the checkpoint hook
-        type='CheckpointHook',
-        interval=5000,  # The save interval is 5000 iterations
-        save_optimizer=True,
-        by_epoch=False,  # Count by iterations
-        out_dir=save_dir,
-    ),
-    timer=dict(type='IterTimerHook'),
-    logger=dict(type='LoggerHook', interval=100),  # Config to register logger hook
-    param_scheduler=dict(type='ParamSchedulerHook'),
-    sampler_seed=dict(type='DistSamplerSeedHook'),
-)
-```
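-
-For example, a minimal sketch of disabling the default parameter scheduler hook:
-
-```python
-default_hooks = dict(param_scheduler=None)  # disable the ParamSchedulerHook
-```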
-
-`custom_hooks` is a list of hook configs. Users can develop their own hooks and insert them in this field.
-
-```python
-custom_hooks = [dict(type='BasicVisualizationHook', interval=1)] # Config of visualization hook
-```
-
-### Runtime config
-
-```python
-default_scope = 'mmagic' # Used to set registries location
-env_cfg = dict(  # Parameters to setup distributed training, the port can also be set
-    cudnn_benchmark=False,
-    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=4),
-    dist_cfg=dict(backend='nccl'),
-)
-log_level = 'INFO'  # The level of logging
-log_processor = dict(type='LogProcessor', window_size=100, by_epoch=False)  # Used to build log processor
-load_from = None  # load models as a pre-trained model from a given path. This will not resume training.
-resume = False  # Resume checkpoints from a given path, the training will be resumed from the epoch when the checkpoint was saved.
-```
-
-## An example of StyleGAN2
-
-Taking [Stylegan2 at 1024x1024 scale](https://github.com/open-mmlab/mmagic/blob/main/configs//styleganv2/stylegan2_c2_8xb4-fp16-global-800kiters_quicktest-ffhq-256x256.py) as an example,
-we introduce each field in the config according to different functional modules.
-
-### Model config
-
-In addition to neural network components such as the generator and discriminator, a model config also requires `data_preprocessor` and `loss_config`, and some models contain `ema_config`.
-`data_preprocessor` is responsible for processing a batch of data output by the dataloader.
-`loss_config` is responsible for the weights of loss terms.
-`ema_config` is responsible for the exponential moving average (EMA) operation of the generator.
-
-```python
-model = dict(
-    type='StyleGAN2',  # The name of the model
-    data_preprocessor=dict(type='DataPreprocessor'),  # The config of data preprocessor, usually includes image normalization and padding
-    generator=dict(  # The config for generator
-        type='StyleGANv2Generator',  # The name of the generator
-        out_size=1024,  # The output resolution of the generator
-        style_channels=512),  # The number of style channels of the generator
-    discriminator=dict(  # The config for discriminator
-        type='StyleGAN2Discriminator',  # The name of the discriminator
-        in_size=1024),  # The input resolution of the discriminator
-    ema_config=dict(  # The config for EMA
-        type='ExponentialMovingAverage',  # Specify the type of the average model
-        interval=1,  # The interval of EMA operation
-        momentum=0.9977843871238888),  # The momentum of EMA operation
-    loss_config=dict(  # The config for loss terms
-        r1_loss_weight=80.0,  # The weight for r1 gradient penalty
-        r1_interval=16,  # The interval of r1 gradient penalty
-        norm_mode='HWC',  # The normalization mode for r1 gradient penalty
-        g_reg_interval=4,  # The interval for generator's regularization
-        g_reg_weight=8.0,  # The weight for generator's regularization
-        pl_batch_shrink=2))  # The factor of shrinking the batch size in path length regularization
-```
-
-### Dataset and evaluator config
-
-[Dataloaders](https://pytorch.org/docs/stable/data.html?highlight=data%20loader#torch.utils.data.DataLoader) are required for the training, validation, and testing of the [runner](https://mmengine.readthedocs.io/en/latest/tutorials/runner.html).
-Dataset and data pipeline need to be set to build the dataloader. Due to the complexity of this part, we use intermediate variables to simplify the writing of dataloader configs.
-
-```python
-dataset_type = 'BasicImageDataset'  # Dataset type, this will be used to define the dataset
-data_root = './data/ffhq/'  # Root path of data
-
-train_pipeline = [  # Training data process pipeline
-    dict(type='LoadImageFromFile', key='img'),  # First pipeline to load images from file path
-    dict(type='Flip', keys=['img'], direction='horizontal'),  # Augmentation pipeline that flips the images
-    dict(type='PackInputs', keys=['img'])  # The last pipeline that formats the annotation data (if any) and decides which keys in the data should be packed into data_samples
-]
-val_pipeline = [
-    dict(type='LoadImageFromFile', key='img'),  # First pipeline to load images from file path
-    dict(type='PackInputs', keys=['img'])  # The last pipeline that formats the annotation data (if any) and decides which keys in the data should be packed into data_samples
-]
-train_dataloader = dict(  # The config of train dataloader
-    batch_size=4,  # Batch size of a single GPU
-    num_workers=8,  # Worker to pre-fetch data for each single GPU
-    persistent_workers=True,  # If ``True``, the dataloader will not shut down the worker processes after an epoch ends, which can accelerate training speed.
-    sampler=dict(  # The config of training data sampler
-        type='InfiniteSampler',  # InfiniteSampler for iteration-based training. Refer to https://github.com/open-mmlab/mmengine/blob/fe0eb0a5bbc8bf816d5649bfdd34908c258eb245/mmengine/dataset/sampler.py#L107
-        shuffle=True),  # Whether randomly shuffle the training data
-    dataset=dict(  # The config of the training dataset
-        type=dataset_type,
-        data_root=data_root,
-        pipeline=train_pipeline))
-val_dataloader = dict(  # The config of validation dataloader
-    batch_size=4,  # Batch size of a single GPU
-    num_workers=8,  # Worker to pre-fetch data for each single GPU
-    dataset=dict(  # The config of the validation dataset
-        type=dataset_type,
-        data_root=data_root,
-        pipeline=val_pipeline),
-    sampler=dict(  # The config of validation data sampler
-        type='DefaultSampler',  # DefaultSampler which supports both distributed and non-distributed training. Refer to https://github.com/open-mmlab/mmengine/blob/fe0eb0a5bbc8bf816d5649bfdd34908c258eb245/mmengine/dataset/sampler.py#L14
-        shuffle=False),  # Whether randomly shuffle the validation data
-    persistent_workers=True)
-test_dataloader = val_dataloader  # The config of the testing dataloader
-```
-
-[Evaluators](https://mmengine.readthedocs.io/en/latest/tutorials/evaluation.html) are used to compute the metrics of the trained model on the validation and testing datasets.
-The config of evaluators consists of one or a list of metric configs:
-
-```python
-val_evaluator = dict(  # The config for validation evaluator
-    type='Evaluator',  # The type of evaluation
-    metrics=[  # The config for metrics
-        dict(
-            type='FrechetInceptionDistance',
-            prefix='FID-Full-50k',
-            fake_nums=50000,
-            inception_style='StyleGAN',
-            sample_model='ema'),
-        dict(type='PrecisionAndRecall', fake_nums=50000, prefix='PR-50K'),
-        dict(type='PerceptualPathLength', fake_nums=50000, prefix='ppl-w')
-    ])
-test_evaluator = val_evaluator  # The config for testing evaluator
-```
-
-### Training and testing config
-
-MMEngine's runner uses Loop to control the training, validation, and testing processes.
-Users can set the maximum training iteration and validation intervals with these fields.
-
-```python
-train_cfg = dict(  # The config for training
-    by_epoch=False,  # Set `by_epoch` as False to use iteration-based training
-    val_begin=1,  # Which iteration to start the validation
-    val_interval=10000,  # Validation intervals
-    max_iters=800002)  # Maximum training iterations
-val_cfg = dict(type='MultiValLoop')  # The validation loop type
-test_cfg = dict(type='MultiTestLoop')  # The testing loop type
-```
-
-### Optimization config
-
-`optim_wrapper` is the field to configure optimization related settings.
-The optimizer wrapper not only provides the functions of the optimizer, but also supports functions such as gradient clipping, mixed precision training, etc. Find more in [optimizer wrapper tutorial](https://mmengine.readthedocs.io/en/latest/tutorials/optim_wrapper.html).
-
-```python
-optim_wrapper = dict(
-    constructor='MultiOptimWrapperConstructor',
-    generator=dict(
-        optimizer=dict(type='Adam', lr=0.0016, betas=(0, 0.9919919678228657))),
-    discriminator=dict(
-        optimizer=dict(
-            type='Adam',
-            lr=0.0018823529411764706,
-            betas=(0, 0.9905854573074332))))
-```
-
-`param_scheduler` is a field that configures methods of adjusting optimization hyperparameters such as learning rate and momentum.
-Users can combine multiple schedulers to create a desired parameter adjustment strategy.
-Find more in [parameter scheduler tutorial](https://mmengine.readthedocs.io/en/latest/tutorials/param_scheduler.html).
-Since StyleGAN2 does not use a parameter scheduler, we use the config in [CycleGAN](https://github.com/open-mmlab/mmagic/blob/main/configs/cyclegan/cyclegan_lsgan-id0-resnet-in_1xb1-250kiters_summer2winter.py) as an example:
-
-```python
-# parameter scheduler in CycleGAN config
-param_scheduler = dict(
-    type='LinearLrInterval',  # The type of scheduler
-    interval=400,  # The interval to update the learning rate
-    by_epoch=False,  # The scheduler is called by iteration
-    start_factor=0.0002,  # The number we multiply parameter value in the first iteration
-    end_factor=0,  # The number we multiply parameter value at the end of linear changing process.
-    begin=40000,  # The start iteration of the scheduler
-    end=80000)  # The end iteration of the scheduler
-```
-
-### Hook config
-
-Users can attach hooks to training, validation, and testing loops to insert some operations during running. There are two different hook fields, one is `default_hooks` and the other is `custom_hooks`.
-
-`default_hooks` is a dict of hook configs for the hooks required at runtime. They have default priorities which should not be modified. If not set, the runner will use the default values. To disable a default hook, users can set its config to `None`.
-
-```python
-default_hooks = dict(
-    timer=dict(type='IterTimerHook'),
-    logger=dict(type='LoggerHook', interval=100, log_metric_by_epoch=False),
-    checkpoint=dict(
-        type='CheckpointHook',
-        interval=10000,
-        by_epoch=False,
-        less_keys=['FID-Full-50k/fid'],
-        greater_keys=['IS-50k/is'],
-        save_optimizer=True,
-        save_best='FID-Full-50k/fid'))
-```
-
-`custom_hooks` is a list of hook configs. Users can develop their own hooks and insert them in this field.
-
-```python
-custom_hooks = [
-    dict(
-        type='VisualizationHook',
-        interval=5000,
-        fixed_input=True,
-        vis_kwargs_list=dict(type='GAN', name='fake_img'))
-]
-```
-
-### Runtime config
-
-```python
-default_scope = 'mmagic'  # The default registry scope to find modules. Refer to https://mmengine.readthedocs.io/en/latest/advanced_tutorials/registry.html
-
-# config for environment
-env_cfg = dict(
-    cudnn_benchmark=True,  # whether to enable cudnn benchmark.
-    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),  # set multi process parameters.
-    dist_cfg=dict(backend='nccl'),  # set distributed parameters.
-)
-
-log_level = 'INFO'  # The level of logging
-log_processor = dict(
-    type='LogProcessor',  # log processor to process runtime logs
-    by_epoch=False)  # print log by iteration
-load_from = None  # load model checkpoint as a pre-trained model for a given path
-resume = False  # Whether to resume from the checkpoint defined in `load_from`. If `load_from` is `None`, it will resume the latest checkpoint in `work_dir`
-```
-
-## Other examples
-
-### An example of config system for inpainting
-
-To help the users have a basic idea of a complete config and the modules in an inpainting system,
-we make brief comments on the config of Global&Local as follows.
-For more detailed usage and the corresponding alternatives for each module, please refer to the API documentation.
-
-```python
-model = dict(
-    type='GLInpaintor', # The name of inpaintor
-    data_preprocessor=dict(
-        type='DataPreprocessor', # The name of data preprocessor
-        mean=[127.5], # Mean value used in data normalization
-        std=[127.5], # Std value used in data normalization
-    ),
-    encdec=dict(
-        type='GLEncoderDecoder', # The name of encoder-decoder
-        encoder=dict(type='GLEncoder', norm_cfg=dict(type='SyncBN')), # The config of encoder
-        decoder=dict(type='GLDecoder', norm_cfg=dict(type='SyncBN')), # The config of decoder
-        dilation_neck=dict(
-            type='GLDilationNeck', norm_cfg=dict(type='SyncBN'))), # The config of dilation neck
-    disc=dict(
-        type='GLDiscs', # The name of discriminator
-        global_disc_cfg=dict(
-            in_channels=3, # The input channel of discriminator
-            max_channels=512, # The maximum middle channel in discriminator
-            fc_in_channels=512 * 4 * 4, # The input channel of last fc layer
-            fc_out_channels=1024, # The output channel of last fc channel
-            num_convs=6, # The number of convs used in discriminator
-            norm_cfg=dict(type='SyncBN') # The config of norm layer
-        ),
-        local_disc_cfg=dict(
-            in_channels=3, # The input channel of discriminator
-            max_channels=512, # The maximum middle channel in discriminator
-            fc_in_channels=512 * 4 * 4, # The input channel of last fc layer
-            fc_out_channels=1024, # The output channel of last fc channel
-            num_convs=5, # The number of convs used in discriminator
-            norm_cfg=dict(type='SyncBN') # The config of norm layer
-        ),
-    ),
-    loss_gan=dict(
-        type='GANLoss', # The name of GAN loss
-        gan_type='vanilla', # The type of GAN loss
-        loss_weight=0.001 # The weight of GAN loss
-    ),
-    loss_l1_hole=dict(
-        type='L1Loss', # The type of l1 loss
-        loss_weight=1.0 # The weight of l1 loss
-    ))
-
-train_cfg = dict(
-    type='IterBasedTrainLoop',# The name of train loop type
-    max_iters=500002, # The number of total iterations
-    val_interval=50000, # The number of validation interval iterations
-)
-val_cfg = dict(type='ValLoop') # The name of validation loop type
-test_cfg = dict(type='TestLoop') # The name of test loop type
-
-val_evaluator = [
-    dict(type='MAE', mask_key='mask', scaling=100), # The name of metrics to evaluate
-    dict(type='PSNR'), # The name of metrics to evaluate
-    dict(type='SSIM'), # The name of metrics to evaluate
-]
-test_evaluator = val_evaluator
-
-input_shape = (256, 256) # The shape of input image
-
-train_pipeline = [
-    dict(type='LoadImageFromFile', key='gt'), # The config of loading image
-    dict(
-        type='LoadMask', # The type of loading mask pipeline
-        mask_mode='bbox', # The type of mask
-        mask_config=dict(
-            max_bbox_shape=(128, 128), # The shape of bbox
-            max_bbox_delta=40, # The changing delta of bbox height and width
-            min_margin=20,  # The minimum margin from bbox to the image border
-            img_shape=input_shape)),  # The input image shape
-    dict(
-        type='Crop', # The type of crop pipeline
-        keys=['gt'],  # The keys of images to be cropped
-        crop_size=(384, 384),  # The size of cropped patch
-        random_crop=True,  # Whether to use random crop
-    ),
-    dict(
-        type='Resize',  # The type of resizing pipeline
-        keys=['gt'],  # The keys of images to be resized
-        scale=input_shape,  # The scale of resizing function
-        keep_ratio=False,  # Whether to keep ratio during resizing
-    ),
-    dict(
-        type='Normalize',  # The type of normalizing pipeline
-        keys=['gt_img'],  # The keys of images to be normed
-        mean=[127.5] * 3,  # Mean value used in normalization
-        std=[127.5] * 3,  # Std value used in normalization
-        to_rgb=False),  # Whether to transfer image channels to rgb
-    dict(type='GetMaskedImage'), # The config of getting masked image pipeline
-    dict(type='PackInputs'), # The config of collecting data from the current pipeline
-]
-
-test_pipeline = train_pipeline  # Constructing testing/validation pipeline
-
-dataset_type = 'BasicImageDataset' # The type of dataset
-data_root = 'data/places'  # Root path of data
-
-train_dataloader = dict(
-    batch_size=12, # Batch size of a single GPU
-    num_workers=4, # The number of workers to pre-fetch data for each single GPU
-    persistent_workers=False, # Whether to keep the worker Dataset instances alive
-    sampler=dict(type='InfiniteSampler', shuffle=False), # The type of data sampler
-    dataset=dict(  # Train dataset config
-        type=dataset_type, # Type of dataset
-        data_root=data_root, # Root path of data
-        data_prefix=dict(gt='data_large'), # Prefix of image path
-        ann_file='meta/places365_train_challenge.txt', # Path of annotation file
-        test_mode=False,
-        pipeline=train_pipeline,
-    ))
-
-val_dataloader = dict(
-    batch_size=1, # Batch size of a single GPU
-    num_workers=4, # The number of workers to pre-fetch data for each single GPU
-    persistent_workers=False, # Whether to keep the worker Dataset instances alive
-    drop_last=False, # Whether drop the last incomplete batch
-    sampler=dict(type='DefaultSampler', shuffle=False), # The type of data sampler
-    dataset=dict( # Validation dataset config
-        type=dataset_type, # Type of dataset
-        data_root=data_root, # Root path of data
-        data_prefix=dict(gt='val_large'), # Prefix of image path
-        ann_file='meta/places365_val.txt', # Path of annotation file
-        test_mode=True,
-        pipeline=test_pipeline,
-    ))
-
-test_dataloader = val_dataloader
-
-model_wrapper_cfg = dict(type='MMSeparateDistributedDataParallel') # The name of model wrapper
-
-optim_wrapper = dict( # Config used to build optimizer, support all the optimizers in PyTorch whose arguments are also the same as those in PyTorch
-    constructor='MultiOptimWrapperConstructor',
-    generator=dict(
-        type='OptimWrapper', optimizer=dict(type='Adam', lr=0.0004)),
-    disc=dict(type='OptimWrapper', optimizer=dict(type='Adam', lr=0.0004)))
-
-default_scope = 'mmagic' # Used to set registries location
-save_dir = './work_dirs' # Directory to save the model checkpoints and logs for the current experiments
-exp_name = 'gl_places'  # The experiment name
-
-default_hooks = dict( # Used to build default hooks
-    timer=dict(type='IterTimerHook'),
-    logger=dict(type='LoggerHook', interval=100), # Config to register logger hook
-    param_scheduler=dict(type='ParamSchedulerHook'),
-    checkpoint=dict( # Config to set the checkpoint hook
-        type='CheckpointHook',
-        interval=50000,
-        by_epoch=False,
-        out_dir=save_dir),
-    sampler_seed=dict(type='DistSamplerSeedHook'),
-)
-
-env_cfg = dict( # Parameters to setup distributed training, the port can also be set
-    cudnn_benchmark=False,
-    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
-    dist_cfg=dict(backend='nccl'),
-)
-
-vis_backends = [dict(type='LocalVisBackend')] # The name of visualization backend
-visualizer = dict( # Config used to build visualizer
-    type='ConcatImageVisualizer',
-    vis_backends=vis_backends,
-    fn_key='gt_path',
-    img_keys=['gt_img', 'input', 'pred_img'],
-    bgr2rgb=True)
-custom_hooks = [dict(type='BasicVisualizationHook', interval=1)] # Used to build custom hooks
-
-log_level = 'INFO' # The level of logging
-log_processor = dict(type='LogProcessor', by_epoch=False) # Used to build log processor
-
-load_from = None # load models as a pre-trained model from a given path. This will not resume training.
-resume = False # Resume checkpoints from a given path, the training will be resumed from the epoch when the checkpoint was saved.
-
-find_unused_parameters = False  # Whether to set find unused parameters in ddp
-```
-
-### An example of config system for matting
-
-To help the users have a basic idea of a complete config, we make brief comments on the config of the original DIM model we implemented as follows. For more detailed usage and the corresponding alternatives for each module, please refer to the API documentation.
-
-```python
-# model settings
-model = dict(
-    type='DIM',  # The name of model (we call mattor).
-    data_preprocessor=dict(  # The Config to build data preprocessor
-        type='DataPreprocessor',
-        mean=[123.675, 116.28, 103.53],
-        std=[58.395, 57.12, 57.375],
-        bgr_to_rgb=True,
-        proc_inputs='normalize',
-        proc_trimap='rescale_to_zero_one',
-        proc_gt='rescale_to_zero_one',
-    ),
-    backbone=dict(  # The config of the backbone.
-        type='SimpleEncoderDecoder',  # The type of the backbone.
-        encoder=dict(  # The config of the encoder.
-            type='VGG16'),  # The type of the encoder.
-        decoder=dict(  # The config of the decoder.
-            type='PlainDecoder')),  # The type of the decoder.
-    pretrained='./weights/vgg_state_dict.pth',  # The pretrained weight of the encoder to be loaded.
-    loss_alpha=dict(  # The config of the alpha loss.
-        type='CharbonnierLoss',  # The type of the loss for predicted alpha matte.
-        loss_weight=0.5),  # The weight of the alpha loss.
-    loss_comp=dict(  # The config of the composition loss.
-        type='CharbonnierCompLoss',  # The type of the composition loss.
-        loss_weight=0.5), # The weight of the composition loss.
-    train_cfg=dict(  # Config of training DIM model.
-        train_backbone=True,  # In DIM stage1, backbone is trained.
-        train_refiner=False),  # In DIM stage1, refiner is not trained.
-    test_cfg=dict(  # Config of testing DIM model.
-        refine=False,  # Whether to use the refiner output as the final output; in stage1, we don't use it.
-        resize_method='pad',
-        resize_mode='reflect',
-        size_divisor=32,
-    ),
-)
-
-# data settings
-dataset_type = 'AdobeComp1kDataset'  # Dataset type, this will be used to define the dataset.
-data_root = 'data/adobe_composition-1k'  # Root path of data.
-
-train_pipeline = [  # Training data processing pipeline.
-    dict(
-        type='LoadImageFromFile',  # Load alpha matte from file.
-        key='alpha',  # Key of alpha matte in annotation file. The pipeline will read alpha matte from path `alpha_path`.
-        color_type='grayscale'),  # Load as grayscale image which has shape (height, width).
-    dict(
-        type='LoadImageFromFile',  # Load image from file.
-        key='fg'),  # Key of image to load. The pipeline will read fg from path `fg_path`.
-    dict(
-        type='LoadImageFromFile',  # Load image from file.
-        key='bg'),  # Key of image to load. The pipeline will read bg from path `bg_path`.
-    dict(
-        type='LoadImageFromFile',  # Load image from file.
-        key='merged'),  # Key of image to load. The pipeline will read merged from path `merged_path`.
-    dict(
-        type='CropAroundUnknown',  # Crop images around unknown area (semi-transparent area).
-        keys=['alpha', 'merged', 'fg', 'bg'],  # Images to crop.
-        crop_sizes=[320, 480, 640]),  # Candidate crop size.
-    dict(
-        type='Flip',  # Augmentation pipeline that flips the images.
-        keys=['alpha', 'merged', 'fg', 'bg']),  # Images to be flipped.
-    dict(
-        type='Resize',  # Augmentation pipeline that resizes the images.
-        keys=['alpha', 'merged', 'fg', 'bg'],  # Images to be resized.
-        scale=(320, 320),  # Target size.
-        keep_ratio=False),  # Whether to keep the ratio between height and width.
-    dict(
-        type='GenerateTrimap',  # Generate trimap from alpha matte.
-        kernel_size=(1, 30)),  # Kernel size range of the erode/dilate kernel.
-    dict(type='PackInputs'),  # The config of collecting data from the current pipeline
-]
-test_pipeline = [
-    dict(
-        type='LoadImageFromFile',  # Load alpha matte.
-        key='alpha',  # Key of alpha matte in annotation file. The pipeline will read alpha matte from path `alpha_path`.
-        color_type='grayscale',
-        save_original_img=True),
-    dict(
-        type='LoadImageFromFile',  # Load image from file
-        key='trimap',  # Key of image to load. The pipeline will read trimap from path `trimap_path`.
-        color_type='grayscale',  # Load as grayscale image which has shape (height, width).
-        save_original_img=True),  # Save a copy of trimap for calculating metrics. It will be saved with key `ori_trimap`
-    dict(
-        type='LoadImageFromFile',  # Load image from file
-        key='merged'),  # Key of image to load. The pipeline will read merged from path `merged_path`.
-    dict(type='PackInputs'),  # The config of collecting data from the current pipeline
-]
-
-train_dataloader = dict(
-    batch_size=1,  # Batch size of a single GPU
-    num_workers=4,  # The number of workers to pre-fetch data for each single GPU
-    persistent_workers=False,  # Whether to keep the worker Dataset instances alive
-    sampler=dict(type='InfiniteSampler', shuffle=True),  # The type of data sampler
-    dataset=dict(  # Train dataset config
-        type=dataset_type,  # Type of dataset
-        data_root=data_root,  # Root path of data
-        ann_file='training_list.json',  # Path of annotation file
-        test_mode=False,
-        pipeline=train_pipeline,
-    ))
-
-val_dataloader = dict(
-    batch_size=1,  # Batch size of a single GPU
-    num_workers=4,  # The number of workers to pre-fetch data for each single GPU
-    persistent_workers=False,  # Whether to keep the worker Dataset instances alive
-    drop_last=False,  # Whether to drop the last incomplete batch
-    sampler=dict(type='DefaultSampler', shuffle=False),  # The type of data sampler
-    dataset=dict(  # Validation dataset config
-        type=dataset_type,  # Type of dataset
-        data_root=data_root,  # Root path of data
-        ann_file='test_list.json',  # Path of annotation file
-        test_mode=True,
-        pipeline=test_pipeline,
-    ))
-
-test_dataloader = val_dataloader
-
-val_evaluator = [
-    dict(type='SAD'),  # The name of metrics to evaluate
-    dict(type='MattingMSE'),  # The name of metrics to evaluate
-    dict(type='GradientError'),  # The name of metrics to evaluate
-    dict(type='ConnectivityError'),  # The name of metrics to evaluate
-]
-test_evaluator = val_evaluator
-
-train_cfg = dict(
-    type='IterBasedTrainLoop',  # The name of train loop type
-    max_iters=1_000_000,  # The number of total iterations
-    val_interval=40000,  # The number of validation interval iterations
-)
-val_cfg = dict(type='ValLoop')  # The name of validation loop type
-test_cfg = dict(type='TestLoop')  # The name of test loop type
-
-# optimizer
-optim_wrapper = dict(
-    type='OptimWrapper',
-    optimizer=dict(type='Adam', lr=0.00001),
-)  # Config used to build the optimizer; all PyTorch optimizers are supported with the same arguments as in PyTorch.
-
-default_scope = 'mmagic'  # Used to set registries location
-save_dir = './work_dirs'  # Directory to save the model checkpoints and logs for the current experiments.
-
-default_hooks = dict(  # Used to build default hooks
-    timer=dict(type='IterTimerHook'),
-    logger=dict(type='LoggerHook', interval=100),  # Config to register logger hook
-    param_scheduler=dict(type='ParamSchedulerHook'),
-    checkpoint=dict(  # Config to set the checkpoint hook
-        type='CheckpointHook',
-        interval=40000,  # The save interval is 40000 iterations.
-        by_epoch=False,  # Count by iterations.
-        out_dir=save_dir),
-    sampler_seed=dict(type='DistSamplerSeedHook'),
-)
-
-env_cfg = dict(  # Parameters to set up distributed training; the port can also be set
-    cudnn_benchmark=False,
-    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=4),
-    dist_cfg=dict(backend='nccl'),
-)
-
-log_level = 'INFO'  # The level of logging
-log_processor = dict(type='LogProcessor', by_epoch=False)  # Used to build log processor
-
-load_from = None  # Load a model as a pre-trained model from a given path. This will not resume training.
-resume = False  # Resume from a given checkpoint path. Training will be resumed from the point at which the checkpoint was saved.
-```
-
-### An example of config system for restoration
-
-To help the users have a basic idea of a complete config, we add brief comments to the config of the EDSR model we implemented, as follows. For more detailed usage and the corresponding alternatives for each module, please refer to the API documentation.
-
-```python
-exp_name = 'edsr_x2c64b16_1x16_300k_div2k'  # The experiment name
-work_dir = f'./work_dirs/{exp_name}'
-save_dir = './work_dirs/'
-
-load_from = None  # based on pre-trained x2 model
-
-scale = 2  # Scale factor for upsampling
-# model settings
-model = dict(
-    type='BaseEditModel',  # Name of the model
-    generator=dict(  # Config of the generator
-        type='EDSRNet',  # Type of the generator
-        in_channels=3,  # Channel number of inputs
-        out_channels=3,  # Channel number of outputs
-        mid_channels=64,  # Channel number of intermediate features
-        num_blocks=16,  # Block number in the trunk network
-        upscale_factor=scale,  # Upsampling factor
-        res_scale=1,  # Used to scale the residual in residual block
-        rgb_mean=(0.4488, 0.4371, 0.4040),  # Image mean in RGB orders
-        rgb_std=(1.0, 1.0, 1.0)),  # Image std in RGB orders
-    pixel_loss=dict(type='L1Loss', loss_weight=1.0, reduction='mean'),  # Config for pixel loss
-    train_cfg=dict(),  # Config of training model.
-    test_cfg=dict(),  # Config of testing model.
-    data_preprocessor=dict(  # The Config to build data preprocessor
-        type='DataPreprocessor', mean=[0., 0., 0.], std=[255., 255., 255.]))
-
-train_pipeline = [  # Training data processing pipeline
-    dict(type='LoadImageFromFile',  # Load images from files
-        key='img',  # Keys in results to find corresponding path
-        color_type='color',  # Color type of image
-        channel_order='rgb',  # Channel order of image
-        imdecode_backend='cv2'),  # decode backend
-    dict(type='LoadImageFromFile',  # Load images from files
-        key='gt',  # Keys in results to find corresponding path
-        color_type='color',  # Color type of image
-        channel_order='rgb',  # Channel order of image
-        imdecode_backend='cv2'),  # decode backend
-    dict(type='SetValues', dictionary=dict(scale=scale)),  # Set value to destination keys
-    dict(type='PairedRandomCrop', gt_patch_size=96),  # Paired random crop
-    dict(type='Flip',  # Flip images
-        keys=['lq', 'gt'],  # Images to be flipped
-        flip_ratio=0.5,  # Flip ratio
-        direction='horizontal'),  # Flip direction
-    dict(type='Flip',  # Flip images
-        keys=['lq', 'gt'],  # Images to be flipped
-        flip_ratio=0.5,  # Flip ratio
-        direction='vertical'),  # Flip direction
-    dict(type='RandomTransposeHW',  # Random transpose h and w for images
-        keys=['lq', 'gt'],  # Images to be transposed
-        transpose_ratio=0.5  # Transpose ratio
-        ),
-    dict(type='PackInputs')  # The config of collecting data from the current pipeline
-]
-test_pipeline = [  # Test pipeline
-    dict(type='LoadImageFromFile',  # Load images from files
-        key='img',  # Keys in results to find corresponding path
-        color_type='color',  # Color type of image
-        channel_order='rgb',  # Channel order of image
-        imdecode_backend='cv2'),  # decode backend
-    dict(type='LoadImageFromFile',  # Load images from files
-        key='gt',  # Keys in results to find corresponding path
-        color_type='color',  # Color type of image
-        channel_order='rgb',  # Channel order of image
-        imdecode_backend='cv2'),  # decode backend
-    dict(type='ToTensor', keys=['img', 'gt']),  # Convert images to tensor
-    dict(type='PackInputs')  # The config of collecting data from the current pipeline
-]
-
-# dataset settings
-dataset_type = 'BasicImageDataset'  # The type of dataset
-data_root = 'data'  # Root path of data
-
-train_dataloader = dict(
-    num_workers=4,  # The number of workers to pre-fetch data for each single GPU
-    persistent_workers=False,  # Whether to keep the worker Dataset instances alive
-    sampler=dict(type='InfiniteSampler', shuffle=True),  # The type of data sampler
-    dataset=dict(  # Train dataset config
-        type=dataset_type,  # Type of dataset
-        ann_file='meta_info_DIV2K800sub_GT.txt',  # Path of annotation file
-        metainfo=dict(dataset_type='div2k', task_name='sisr'),
-        data_root=data_root + '/DIV2K',  # Root path of data
-        data_prefix=dict(  # Prefix of image path
-            img='DIV2K_train_LR_bicubic/X2_sub', gt='DIV2K_train_HR_sub'),
-        filename_tmpl=dict(img='{}', gt='{}'),  # Filename template
-        pipeline=train_pipeline))
-val_dataloader = dict(
-    num_workers=4,  # The number of workers to pre-fetch data for each single GPU
-    persistent_workers=False,  # Whether to keep the worker Dataset instances alive
-    drop_last=False,  # Whether to drop the last incomplete batch
-    sampler=dict(type='DefaultSampler', shuffle=False),  # The type of data sampler
-    dataset=dict(  # Validation dataset config
-        type=dataset_type,  # Type of dataset
-        metainfo=dict(dataset_type='set5', task_name='sisr'),
-        data_root=data_root + '/Set5',  # Root path of data
-        data_prefix=dict(img='LRbicx2', gt='GTmod12'),  # Prefix of image path
-        pipeline=test_pipeline))
-test_dataloader = val_dataloader
-
-val_evaluator = [
-    dict(type='MAE'),  # The name of metrics to evaluate
-    dict(type='PSNR', crop_border=scale),  # The name of metrics to evaluate
-    dict(type='SSIM', crop_border=scale),  # The name of metrics to evaluate
-]
-test_evaluator = val_evaluator
-
-train_cfg = dict(
-    type='IterBasedTrainLoop', max_iters=300000, val_interval=5000)  # Config of train loop type
-val_cfg = dict(type='ValLoop')  # The name of validation loop type
-test_cfg = dict(type='TestLoop')  # The name of test loop type
-
-# optimizer
-optim_wrapper = dict(
-    type='OptimWrapper',
-    optimizer=dict(type='Adam', lr=0.00001),
-)  # Config used to build the optimizer; all PyTorch optimizers are supported with the same arguments as in PyTorch.
-
-param_scheduler = dict(  # Config of learning policy
-    type='MultiStepLR', by_epoch=False, milestones=[200000], gamma=0.5)
-
-default_hooks = dict(  # Used to build default hooks
-    checkpoint=dict(  # Config to set the checkpoint hook
-        type='CheckpointHook',
-        interval=5000,  # The save interval is 5000 iterations
-        save_optimizer=True,
-        by_epoch=False,  # Count by iterations
-        out_dir=save_dir,
-    ),
-    timer=dict(type='IterTimerHook'),
-    logger=dict(type='LoggerHook', interval=100),  # Config to register logger hook
-    param_scheduler=dict(type='ParamSchedulerHook'),
-    sampler_seed=dict(type='DistSamplerSeedHook'),
-)
-
-default_scope = 'mmagic'  # Used to set registries location
-save_dir = './work_dirs'  # Directory to save the model checkpoints and logs for the current experiments.
-
-env_cfg = dict(  # Parameters to set up distributed training; the port can also be set
-    cudnn_benchmark=False,
-    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=4),
-    dist_cfg=dict(backend='nccl'),
-)
-
-log_level = 'INFO'  # The level of logging
-log_processor = dict(type='LogProcessor', window_size=100, by_epoch=False)  # Used to build log processor
-
-load_from = None  # Load a model as a pre-trained model from a given path. This will not resume training.
-resume = False  # Resume from a given checkpoint path. Training will be resumed from the point at which the checkpoint was saved.
-```
diff --git a/docs/en/user_guides/dataset_prepare.md b/docs/en/user_guides/dataset_prepare.md
deleted file mode 100644
index 007c58e184..0000000000
--- a/docs/en/user_guides/dataset_prepare.md
+++ /dev/null
@@ -1,39 +0,0 @@
-# Tutorial 2: Prepare datasets
-
-In this section, we will detail how to prepare data and adopt the proper dataset in our repo for different methods.
-
-We support multiple datasets of different tasks.
-There are two ways to use datasets for training and testing models in MMagic:
-
-1. Using downloaded datasets directly
-2. Preprocessing downloaded datasets before using them.
-
-The structure of this guide is as follows:
-
-- [Tutorial 2: Prepare datasets](#tutorial-2-prepare-datasets)
-  - [Download datasets](#download-datasets)
-  - [Prepare datasets](#prepare-datasets)
-  - [The overview of the datasets in MMagic](#the-overview-of-the-datasets-in-mmagic)
-
-## Download datasets
-
-You should download datasets from their homepages first.
-Most datasets are ready to use once downloaded, so you usually only need to make sure the folder structure is correct; further preparation is not necessary.
-For example, you can simply prepare the Vimeo90K-triplet dataset by downloading it from its [homepage](http://toflow.csail.mit.edu/).
-
-## Prepare datasets
-
-Some datasets need to be preprocessed before training or testing. We provide many scripts to prepare datasets in [tools/dataset_converters](https://github.com/open-mmlab/mmagic/tree/main/tools/dataset_converters), and you can follow the tutorial of each dataset to run the scripts.
-For example, we recommend cropping the DIV2K images into sub-images. We provide a script to prepare the cropped DIV2K dataset. You can run the following command:
-
-```shell
-python tools/dataset_converters/div2k/preprocess_div2k_dataset.py --data-root ./data/DIV2K
-```
-
-## The overview of the datasets in MMagic
-
-We provide detailed tutorials, split according to different tasks.
-
-Please check our dataset zoo for data preparation of different tasks.
-
-If you're interested in more details of datasets in MMagic, please check the [advanced guides](../howto/dataset.md).
diff --git a/docs/en/user_guides/deploy.md b/docs/en/user_guides/deploy.md
deleted file mode 100644
index d14a93c9c1..0000000000
--- a/docs/en/user_guides/deploy.md
+++ /dev/null
@@ -1,159 +0,0 @@
-# Tutorial 8: Deploy models in MMagic
-
-The deployment of OpenMMLab codebases, including MMClassification, MMDetection, MMagic, and so on, is supported by [MMDeploy](https://github.com/open-mmlab/mmdeploy).
-The latest deployment guide for MMagic can be found [here](https://mmdeploy.readthedocs.io/en/latest/04-supported-codebases/mmagic.html).
-
-This tutorial is organized as follows:
-
-- [Tutorial 8: Deploy models in MMagic](#tutorial-8-deploy-models-in-mmagic)
-  - [Installation](#installation)
-  - [Convert model](#convert-model)
-  - [Model specification](#model-specification)
-  - [Model inference](#model-inference)
-    - [Backend model inference](#backend-model-inference)
-    - [SDK model inference](#sdk-model-inference)
-  - [Supported models](#supported-models)
-
-## Installation
-
-Please follow the [guide](../get_started/install.md) to install mmagic, and then install mmdeploy from source by following [this](https://mmdeploy.readthedocs.io/en/latest/get_started.html#installation) guide.
-
-```{note}
-If you install mmdeploy prebuilt package, please also clone its repository by 'git clone https://github.com/open-mmlab/mmdeploy.git --depth=1' to get the deployment config files.
-```
-
-## Convert model
-
-Suppose MMagic and mmdeploy repositories are in the same directory, and the working directory is the root path of MMagic.
-
-Take [ESRGAN](../../../configs/esrgan/esrgan_psnr-x4c64b23g32_1xb16-1000k_div2k.py) model as an example.
-You can download its checkpoint from [here](https://download.openmmlab.com/MMagic/restorers/esrgan/esrgan_psnr_x4c64b23g32_1x16_1000k_div2k_20200420-bf5c993c.pth), and then convert it to an ONNX model as follows:
-
-```python
-from mmdeploy.apis import torch2onnx
-from mmdeploy.backend.sdk.export_info import export2SDK
-
-img = 'tests/data/image/face/000001.png'
-work_dir = 'mmdeploy_models/mmagic/onnx'
-save_file = 'end2end.onnx'
-deploy_cfg = '../mmdeploy/configs/mmagic/super-resolution/super-resolution_onnxruntime_dynamic.py'
-model_cfg = 'configs/esrgan/esrgan_psnr-x4c64b23g32_1xb16-1000k_div2k.py'
-model_checkpoint = 'esrgan_psnr_x4c64b23g32_1x16_1000k_div2k_20200420-bf5c993c.pth'
-device = 'cpu'
-
-# 1. convert model to onnx
-torch2onnx(img, work_dir, save_file, deploy_cfg, model_cfg,
-  model_checkpoint, device)
-
-# 2. extract pipeline info for inference by MMDeploy SDK
-export2SDK(deploy_cfg, model_cfg, work_dir, pth=model_checkpoint, device=device)
-```
-
-It is crucial to specify the correct deployment config during model conversion. MMDeploy has already provided built-in deployment config [files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmagic) of all supported backends for mmagic, whose file paths follow the pattern:
-
-```
-{task}/{task}_{backend}-{precision}_{static | dynamic}_{shape}.py
-```
-
-- **{task}:** task in mmagic.
-
-- **{backend}:** inference backend, such as onnxruntime, tensorrt, pplnn, ncnn, openvino, coreml etc.
-
-- **{precision}:** fp16, int8. When it's empty, it means fp32
-
-- **{static | dynamic}:** static shape or dynamic shape
-
-- **{shape}:** input shape or shape range of a model
-
-Therefore, in the above example, you can also convert `ESRGAN` to other backend models by changing the deployment config file, e.g., converting to tensorrt-fp16 model by `super-resolution_tensorrt-fp16_dynamic-32x32-512x512.py`.
-
-```{tip}
-When converting mmagic models to tensorrt models, --device should be set to "cuda"
-```
-
-## Model specification
-
-Before moving on to the model inference chapter, let's take a closer look at the converted model structure, which is very important for model inference.
-
-The converted model is located in the working directory, e.g. `mmdeploy_models/mmagic/onnx` in the previous example. It includes:
-
-```
-mmdeploy_models/mmagic/onnx
-├── deploy.json
-├── detail.json
-├── end2end.onnx
-└── pipeline.json
-```
-
-in which,
-
-- **end2end.onnx**: backend model which can be inferred by ONNX Runtime
-- ***xxx*.json**: the necessary information for mmdeploy SDK
-
-The whole package **mmdeploy_models/mmagic/onnx** is defined as **mmdeploy SDK model**, i.e., **mmdeploy SDK model** includes both backend model and inference meta information.
-
-## Model inference
-
-### Backend model inference
-
-Taking the previously converted `end2end.onnx` model as an example, you can use the following code to run inference with the model.
-
-```python
-from mmdeploy.apis.utils import build_task_processor
-from mmdeploy.utils import get_input_shape, load_config
-import torch
-
-deploy_cfg = '../mmdeploy/configs/mmagic/super-resolution/super-resolution_onnxruntime_dynamic.py'
-model_cfg = 'configs/esrgan/esrgan_psnr-x4c64b23g32_1xb16-1000k_div2k.py'
-device = 'cpu'
-backend_model = ['mmdeploy_models/mmagic/onnx/end2end.onnx']
-image = 'tests/data/image/lq/baboon_x4.png'
-
-# read deploy_cfg and model_cfg
-deploy_cfg, model_cfg = load_config(deploy_cfg, model_cfg)
-
-# build task and backend model
-task_processor = build_task_processor(model_cfg, deploy_cfg, device)
-model = task_processor.build_backend_model(backend_model)
-
-# process input image
-input_shape = get_input_shape(deploy_cfg)
-model_inputs, _ = task_processor.create_input(image, input_shape)
-
-# do model inference
-with torch.no_grad():
-    result = model.test_step(model_inputs)
-
-# visualize results
-task_processor.visualize(
-    image=image,
-    model=model,
-    result=result[0],
-    window_name='visualize',
-    output_file='output_restorer.bmp')
-```
-
-### SDK model inference
-
-You can also perform SDK model inference as follows:
-
-```python
-from mmdeploy_python import Restorer
-import cv2
-
-img = cv2.imread('tests/data/image/lq/baboon_x4.png')
-
-# create a predictor
-restorer = Restorer(model_path='mmdeploy_models/mmagic/onnx', device_name='cpu', device_id=0)
-# perform inference
-result = restorer(img)
-
-# visualize inference result
-cv2.imwrite('output_restorer.bmp', result)
-```
-
-Besides the Python API, the MMDeploy SDK also provides FFIs (Foreign Function Interfaces) for other languages, such as C, C++, C#, Java, and so on. You can learn their usage from the [demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo).
-
-## Supported models
-
-Please refer to [here](https://mmdeploy.readthedocs.io/en/latest/04-supported-codebases/mmagic.html#supported-models) for the supported model list.
diff --git a/docs/en/user_guides/inference.md b/docs/en/user_guides/inference.md
deleted file mode 100644
index 1534a49d69..0000000000
--- a/docs/en/user_guides/inference.md
+++ /dev/null
@@ -1,388 +0,0 @@
-# Tutorial 3: Inference with pre-trained models
-
-MMagic provides high-level APIs for you to easily play with state-of-the-art models on your own images or videos.
-
-In the new API, only two lines of code are needed to implement inference:
-
-```python
-from mmagic.apis import MMagicInferencer
-
-# Create a MMagicInferencer instance
-editor = MMagicInferencer('pix2pix')
-# Infer an image. The input image path and output image path are needed.
-results = editor.infer(img='../resources/input/translation/gt_mask_0.png', result_out_dir='../resources/output/translation/tutorial_translation_pix2pix_res.jpg')
-```
-
-MMagic supports various fundamental generative models, including:
-
-unconditional Generative Adversarial Networks (GANs), conditional GANs, diffusion models, etc.
-
-MMagic also supports various applications, including:
-
-text-to-image, image-to-image translation, 3D-aware generation, image super-resolution, video super-resolution, video frame interpolation, image inpainting, image matting, image restoration, image colorization, image generation, etc.
-
-In this section, we will specify how to play with our pre-trained models.
-
-- [Tutorial 3: Inference with Pre-trained Models](#tutorial-3-inference-with-pre-trained-models)
-  - [Prepare some images or videos for inference](#Prepare-some-images-or-videos-for-inference)
-  - [Generative Models](#Generative-Models)
-    - [Unconditional Generative Adversarial Networks (GANs)](<#Unconditional-Generative-Adversarial-Networks-(GANs)>)
-    - [Conditional Generative Adversarial Networks (GANs)](<#Conditional-Generative-Adversarial-Networks-(GANs)>)
-    - [Diffusion Models](#Diffusion-Models)
-  - [Applications](#Applications)
-    - [Text-to-Image](#Text-to-Image)
-    - [Image-to-image translation](#Image-to-image-translation)
-    - [3D-aware generation](#3D-aware-generation)
-    - [Image super-resolution](#Image-super-resolution)
-    - [Video super-resolution](#Video-super-resolution)
-    - [Video frame interpolation](#Video-frame-interpolation)
-    - [Image inpainting](#Image-inpainting)
-    - [Image matting](#Image-matting)
-    - [Image restoration](#Image-restoration)
-    - [Image colorization](#Image-colorization)
-- [Previous Versions](#Previous-Versions)
-
-## Prepare some images or videos for inference
-
-Please refer to our [tutorials](https://github.com/open-mmlab/mmagic/blob/main/demo/mmagic_inference_tutorial.ipynb) for details.
-
-## Generative Models
-
-### Unconditional Generative Adversarial Networks (GANs)
-
-MMagic provides high-level APIs for sampling images with unconditional GANs. Unconditional GAN models take no input and output an image. We take 'styleganv1' as an example.
-
-```python
-import mmcv
-import matplotlib.pyplot as plt
-from mmagic.apis import MMagicInferencer
-
-# Create a MMagicInferencer instance and infer
-result_out_dir = './resources/output/unconditional/tutorial_unconditional_styleganv1_res.png'
-editor = MMagicInferencer('styleganv1')
-results = editor.infer(result_out_dir=result_out_dir)
-```
-
-We also provide a more user-friendly demo script. You can use [demo/mmagic_inference_demo.py](../../../demo/mmagic_inference_demo.py) with the following commands:
-
-```shell
-python demo/mmagic_inference_demo.py \
-        --model-name styleganv1 \
-        --result-out-dir demo_unconditional_styleganv1_res.jpg
-```
-
-### Conditional Generative Adversarial Networks (GANs)
-
-MMagic provides high-level APIs for sampling images with conditional GANs. Conditional GAN models take a label as input and output an image. We take 'biggan' as an example.
-
-```python
-import mmcv
-import matplotlib.pyplot as plt
-from mmagic.apis import MMagicInferencer
-
-# Create a MMagicInferencer instance and infer
-result_out_dir = './resources/output/conditional/tutorial_conditional_biggan_res.jpg'
-editor = MMagicInferencer('biggan', model_setting=1)
-results = editor.infer(label=1, result_out_dir=result_out_dir)
-```
-
-We also provide a more user-friendly demo script. You can use [demo/mmagic_inference_demo.py](../../../demo/mmagic_inference_demo.py) with the following commands:
-
-```shell
-python demo/mmagic_inference_demo.py \
-        --model-name biggan \
-        --model-setting 1 \
-        --label 1 \
-        --result-out-dir demo_conditional_biggan_res.jpg
-```
-
-### Diffusion Models
-
-MMagic provides high-level APIs for sampling images with diffusion models. We take 'stable_diffusion' as an example.
-
-```python
-import mmcv
-import matplotlib.pyplot as plt
-from mmagic.apis import MMagicInferencer
-
-# Create a MMagicInferencer instance and infer
-editor = MMagicInferencer(model_name='stable_diffusion')
-text_prompts = 'A panda is having dinner at KFC'
-result_out_dir = './resources/output/text2image/tutorial_text2image_sd_res.png'
-editor.infer(text=text_prompts, result_out_dir=result_out_dir)
-```
-
-Use [demo/mmagic_inference_demo.py](../../../demo/mmagic_inference_demo.py) with the following commands:
-
-```shell
-python demo/mmagic_inference_demo.py \
-        --model-name stable_diffusion \
-        --text "A panda is having dinner at KFC" \
-        --result-out-dir demo_text2image_stable_diffusion_res.png
-```
-
-## Applications
-
-### Text-to-Image
-
-Text-to-image models take text as input and output an image. We take 'controlnet-canny' as an example.
-
-```python
-import cv2
-import numpy as np
-import mmcv
-from mmengine import Config
-from PIL import Image
-
-from mmagic.registry import MODELS
-from mmagic.utils import register_all_modules
-
-register_all_modules()
-
-cfg = Config.fromfile('configs/controlnet/controlnet-canny.py')
-controlnet = MODELS.build(cfg.model).cuda()
-
-control_url = 'https://user-images.githubusercontent.com/28132635/230288866-99603172-04cb-47b3-8adb-d1aa532d1d2c.jpg'
-control_img = mmcv.imread(control_url)
-control = cv2.Canny(control_img, 100, 200)
-control = control[:, :, None]
-control = np.concatenate([control] * 3, axis=2)
-control = Image.fromarray(control)
-
-prompt = 'Room with blue walls and a yellow ceiling.'
-
-output_dict = controlnet.infer(prompt, control=control)
-samples = output_dict['samples']
-```
-
-Use [demo/mmagic_inference_demo.py](../../../demo/mmagic_inference_demo.py) with the following commands:
-
-```shell
-python demo/mmagic_inference_demo.py \
-        --model-name controlnet \
-        --model-setting 1 \
-        --text "Room with blue walls and a yellow ceiling." \
-        --control 'https://user-images.githubusercontent.com/28132635/230297033-4f5c32df-365c-4cf4-8e4f-1b76a4cbb0b7.png' \
-        --result-out-dir demo_text2image_controlnet_canny_res.png
-```
-
-### Image-to-image translation
-
-MMagic provides high-level APIs for translating images by using image translation models. Here is an example of building Pix2Pix and obtaining the translated images.
-
-```python
-import mmcv
-import matplotlib.pyplot as plt
-from mmagic.apis import MMagicInferencer
-
-# Create a MMagicInferencer instance and infer
-img_path = './resources/input/translation/gt_mask_0.png'  # example input path
-result_out_dir = './resources/output/translation/tutorial_translation_pix2pix_res.jpg'
-editor = MMagicInferencer('pix2pix')
-results = editor.infer(img=img_path, result_out_dir=result_out_dir)
-```
-
-Use [demo/mmagic_inference_demo.py](../../../demo/mmagic_inference_demo.py) with the following commands:
-
-```shell
-python demo/mmagic_inference_demo.py \
-        --model-name pix2pix \
-        --img ${IMAGE_PATH} \
-        --result-out-dir ${SAVE_PATH}
-```
-
-### 3D-aware generation
-
-```python
-import mmcv
-import matplotlib.pyplot as plt
-from mmagic.apis import MMagicInferencer
-
-# Create a MMagicInferencer instance and infer
-result_out_dir = './resources/output/eg3d-output'
-editor = MMagicInferencer('eg3d')
-results = editor.infer(result_out_dir=result_out_dir)
-```
-
-Use [demo/mmagic_inference_demo.py](../../../demo/mmagic_inference_demo.py) with the following commands:
-
-```shell
-python demo/mmagic_inference_demo.py \
-    --model-name eg3d \
-    --result-out-dir ./resources/output/eg3d-output
-```
-
-### Image super-resolution
-
-Image super-resolution models take an image as input and output a high-resolution image. We take 'esrgan' as an example.
-
-```python
-import mmcv
-import matplotlib.pyplot as plt
-from mmagic.apis import MMagicInferencer
-
-# Create a MMagicInferencer instance and infer
-img = './resources/input/restoration/0901x2.png'
-result_out_dir = './resources/output/restoration/tutorial_restoration_esrgan_res.png'
-editor = MMagicInferencer('esrgan')
-results = editor.infer(img=img, result_out_dir=result_out_dir)
-```
-
-Use [demo/mmagic_inference_demo.py](../../../demo/mmagic_inference_demo.py) with the following commands:
-
-```shell
-python demo/mmagic_inference_demo.py \
-        --model-name esrgan \
-        --img ${IMAGE_PATH} \
-        --result-out-dir ${SAVE_PATH}
-```
-
-### Video super-resolution
-
-```python
-import os
-from mmagic.apis import MMagicInferencer
-from mmengine import mkdir_or_exist
-
-# Create a MMagicInferencer instance and infer
-video = './resources/input/video_interpolation/b-3LLDhc4EU_000000_000010.mp4'
-result_out_dir = './resources/output/video_super_resolution/tutorial_video_super_resolution_basicvsr_res.mp4'
-mkdir_or_exist(os.path.dirname(result_out_dir))
-editor = MMagicInferencer('basicvsr')
-results = editor.infer(video=video, result_out_dir=result_out_dir)
-```
-
-Use [demo/mmagic_inference_demo.py](../../../demo/mmagic_inference_demo.py) with the following commands:
-
-```shell
-python demo/mmagic_inference_demo.py \
-        --model-name basicvsr \
-        --video ./resources/input/video_restoration/QUuC4vJs_000084_000094_400x320.mp4 \
-        --result-out-dir ./resources/output/video_restoration/demo_video_restoration_basicvsr_res.mp4
-```
-
-### Video frame interpolation
-
-Video frame interpolation models take a video as input and output an interpolated video. We take 'flavr' as an example.
-
-```python
-import os
-from mmagic.apis import MMagicInferencer
-from mmengine import mkdir_or_exist
-
-# Create a MMagicInferencer instance and infer
-video = './resources/input/video_interpolation/b-3LLDhc4EU_000000_000010.mp4'
-result_out_dir = './resources/output/video_interpolation/tutorial_video_interpolation_flavr_res.mp4'
-mkdir_or_exist(os.path.dirname(result_out_dir))
-editor = MMagicInferencer('flavr')
-results = editor.infer(video=video, result_out_dir=result_out_dir)
-```
-
-Use [demo/mmagic_inference_demo.py](../../../demo/mmagic_inference_demo.py) with the following commands:
-
-```shell
-python demo/mmagic_inference_demo.py \
-        --model-name flavr \
-        --video ${VIDEO_PATH} \
-        --result-out-dir ${SAVE_PATH}
-```
-
-### Image inpainting
-
-Inpainting models take a masked image and mask pair as input, and output an inpainted image. We take 'global_local' as an example.
-
-```python
-import mmcv
-import matplotlib.pyplot as plt
-from mmagic.apis import MMagicInferencer
-
-img = './resources/input/inpainting/celeba_test.png'
-mask = './resources/input/inpainting/bbox_mask.png'
-
-# Create a MMagicInferencer instance and infer
-result_out_dir = './resources/output/inpainting/tutorial_inpainting_global_local_res.jpg'
-editor = MMagicInferencer('global_local', model_setting=1)
-results = editor.infer(img=img, mask=mask, result_out_dir=result_out_dir)
-```
-
-Use [demo/mmagic_inference_demo.py](../../../demo/mmagic_inference_demo.py) with the following commands:
-
-```shell
-python demo/mmagic_inference_demo.py \
-        --model-name global_local  \
-        --img ./resources/input/inpainting/celeba_test.png \
-        --mask ./resources/input/inpainting/bbox_mask.png \
-        --result-out-dir ./resources/output/inpainting/demo_inpainting_global_local_res.jpg
-```
-
-### Image matting
-
-Matting models take an image and trimap pair as input, and output an alpha matte. We take 'gca' as an example.
-
-```python
-import mmcv
-import matplotlib.pyplot as plt
-from mmagic.apis import MMagicInferencer
-
-img = './resources/input/matting/GT05.jpg'
-trimap = './resources/input/matting/GT05_trimap.jpg'
-
-# Create a MMagicInferencer instance and infer
-result_out_dir = './resources/output/matting/tutorial_matting_gca_res.png'
-editor = MMagicInferencer('gca')
-results = editor.infer(img=img, trimap=trimap, result_out_dir=result_out_dir)
-```
-
-Use [demo/mmagic_inference_demo.py](../../../demo/mmagic_inference_demo.py) with the following commands:
-
-```shell
-python demo/mmagic_inference_demo.py \
-        --model-name gca  \
-        --img ./resources/input/matting/GT05.jpg \
-        --trimap ./resources/input/matting/GT05_trimap.jpg \
-        --result-out-dir ./resources/output/matting/demo_matting_gca_res.png
-```
-
-### Image restoration
-
-```python
-import mmcv
-import matplotlib.pyplot as plt
-from mmagic.apis import MMagicInferencer
-
-# Create a MMagicInferencer instance and infer
-img = './resources/input/restoration/0901x2.png'
-result_out_dir = './resources/output/restoration/tutorial_restoration_nafnet_res.png'
-editor = MMagicInferencer('nafnet')
-results = editor.infer(img=img, result_out_dir=result_out_dir)
-```
-
-```shell
-python demo/mmagic_inference_demo.py \
-        --model-name nafnet \
-        --img ./resources/input/restoration/0901x2.png \
-        --result-out-dir ./resources/output/restoration/demo_restoration_nafnet_res.png
-```
-
-### Image colorization
-
-```python
-import mmcv
-import matplotlib.pyplot as plt
-from mmagic.apis import MMagicInferencer
-
-# Create a MMagicInferencer instance and infer
-img = 'https://github-production-user-asset-6210df.s3.amazonaws.com/49083766/245713512-de973677-2be8-4915-911f-fab90bb17c40.jpg'
-result_out_dir = './resources/output/colorization/tutorial_colorization_res.png'
-editor = MMagicInferencer('inst_colorization')
-results = editor.infer(img=img, result_out_dir=result_out_dir)
-```
-
-```shell
-python demo/mmagic_inference_demo.py \
-        --model-name inst_colorization \
-        --img https://github-production-user-asset-6210df.s3.amazonaws.com/49083766/245713512-de973677-2be8-4915-911f-fab90bb17c40.jpg \
-        --result-out-dir demo_colorization_res.png
-```
-
-## Previous Versions
-
-If you want to use deprecated demos, please use [MMagic v1.0.0rc7](https://github.com/open-mmlab/mmagic/tree/v1.0.0rc7) and reference the [old tutorial](https://github.com/open-mmlab/mmagic/blob/v1.0.0rc7/docs/en/user_guides/inference.md).
diff --git a/docs/en/user_guides/metrics.md b/docs/en/user_guides/metrics.md
deleted file mode 100644
index 7dbe6bc24c..0000000000
--- a/docs/en/user_guides/metrics.md
+++ /dev/null
@@ -1,316 +0,0 @@
-# Tutorial 5: Using metrics in MMagic
-
-MMagic supports **17 metrics** to assess the quality of models.
-
-Please refer to [Train and Test in MMagic](../user_guides/train_test.md) for usages.
-
-Here, we will specify the details of different metrics one by one.
-
-The structure of this guide is as follows:
-
-- [Tutorial 5: Using metrics in MMagic](#tutorial-5-using-metrics-in-mmagic)
-  - [MAE](#mae)
-  - [MSE](#mse)
-  - [PSNR](#psnr)
-  - [SNR](#snr)
-  - [SSIM](#ssim)
-  - [NIQE](#niqe)
-  - [SAD](#sad)
-  - [MattingMSE](#mattingmse)
-  - [GradientError](#gradienterror)
-  - [ConnectivityError](#connectivityerror)
-  - [FID and TransFID](#fid-and-transfid)
-  - [IS and TransIS](#is-and-transis)
-  - [Precision and Recall](#precision-and-recall)
-  - [PPL](#ppl)
-  - [SWD](#swd)
-  - [MS-SSIM](#ms-ssim)
-  - [Equivariance](#equivariance)
-
-## MAE
-
-MAE is the Mean Absolute Error metric for images.
-To evaluate with MAE, please add the following configuration in the config file:
-
-```python
-val_evaluator = [
-    dict(type='MAE'),
-]
-```
-
-## MSE
-
-MSE is the Mean Squared Error metric for images.
-To evaluate with MSE, please add the following configuration in the config file:
-
-```python
-val_evaluator = [
-    dict(type='MSE'),
-]
-```
-
-## PSNR
-
-PSNR is Peak Signal-to-Noise Ratio. Our implementation refers to https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio.
-To evaluate with PSNR, please add the following configuration in the config file:
-
-```python
-val_evaluator = [
-    dict(type='PSNR'),
-]
-```
-
-## SNR
-
-SNR is Signal-to-Noise Ratio. Our implementation refers to https://en.wikipedia.org/wiki/Signal-to-noise_ratio.
-To evaluate with SNR, please add the following configuration in the config file:
-
-```python
-val_evaluator = [
-    dict(type='SNR'),
-]
-```
-
-## SSIM
-
-SSIM is structural similarity for image, proposed in [Image quality assessment: from error visibility to structural similarity](https://live.ece.utexas.edu/publications/2004/zwang_ssim_ieeeip2004.pdf). The results of our implementation are the same as that of the official released MATLAB code in https://ece.uwaterloo.ca/~z70wang/research/ssim/.
-To evaluate with SSIM, please add the following configuration in the config file:
-
-```python
-val_evaluator = [
-    dict(type='SSIM'),
-]
-```
-
-## NIQE
-
-NIQE is Natural Image Quality Evaluator metric, proposed in [Making a "Completely Blind" Image Quality Analyzer](http://www.live.ece.utexas.edu/publications/2013/mittal2013.pdf). Our implementation could produce almost the same results as the official MATLAB codes: http://live.ece.utexas.edu/research/quality/niqe_release.zip.
-
-To evaluate with NIQE, please add the following configuration in the config file:
-
-```python
-val_evaluator = [
-    dict(type='NIQE'),
-]
-```
-
-## SAD
-
-SAD is the Sum of Absolute Differences metric for image matting. This metric computes the per-pixel absolute difference and sums over all pixels.
-To evaluate with SAD, please add the following configuration in the config file:
-
-```python
-val_evaluator = [
-    dict(type='SAD'),
-]
-```
-
-## MattingMSE
-
-MattingMSE is the Mean Squared Error metric for image matting.
-To evaluate with MattingMSE, please add the following configuration in the config file:
-
-```python
-val_evaluator = [
-    dict(type='MattingMSE'),
-]
-```
-
-## GradientError
-
-GradientError is the gradient error for evaluating alpha matte predictions.
-To evaluate with GradientError, please add the following configuration in the config file:
-
-```python
-val_evaluator = [
-    dict(type='GradientError'),
-]
-```
-
-## ConnectivityError
-
-ConnectivityError is the connectivity error for evaluating alpha matte predictions.
-To evaluate with ConnectivityError, please add the following configuration in the config file:
-
-```python
-val_evaluator = [
-    dict(type='ConnectivityError'),
-]
-```
-
-## FID and TransFID
-
-Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network.
-
-In `MMagic`, we provide two versions for FID calculation. One is the commonly used PyTorch version and the other one is used in StyleGAN paper. Meanwhile, we have compared the difference between these two implementations in the StyleGAN2-FFHQ1024 model (the details can be found [here](https://github.com/open-mmlab/mmagic/blob/main/configs/styleganv2/README.md)). Fortunately, there is a marginal difference in the final results. Thus, we recommend users adopt the more convenient PyTorch version.
-
-**About PyTorch version and Tero's version:** The commonly used PyTorch version adopts the modified InceptionV3 network to extract features for real and fake images. However, Tero's FID requires a [script module](https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt) for Tensorflow InceptionV3. Note that applying this script module needs `PyTorch >= 1.6.0`.
-
-**About extracting real inception data:** For the users' convenience, the real features will be automatically extracted at test time and saved locally, and the stored features will be automatically read at the next test. Specifically, we calculate a hash value from the parameters used to extract the real features and use it to mark the feature file. At test time, if `inception_pkl` is not set, we look for the features in `MMAGIC_CACHE_DIR` (~/.cache/openmmlab/mmagic/); if no cached inception pkl is found, extraction will be performed.
-
-To use the FID metric, you should add the metric in a config file like this:
-
-```python
-metrics = [
-    dict(
-        type='FrechetInceptionDistance',
-        prefix='FID-Full-50k',
-        fake_nums=50000,
-        inception_style='StyleGAN',
-        sample_model='ema')
-]
-```
-
-If you work on a new machine, you can copy the `pkl` files from `MMAGIC_CACHE_DIR` on the old machine to the new one and set the `inception_pkl` field.
-
-```python
-metrics = [
-    dict(
-        type='FrechetInceptionDistance',
-        prefix='FID-Full-50k',
-        fake_nums=50000,
-        inception_style='StyleGAN',
-        inception_pkl=
-        'work_dirs/inception_pkl/inception_state-capture_mean_cov-full-33ad4546f8c9152e4b3bdb1b0c08dbaf.pkl',  # copied from old machine
-        sample_model='ema')
-]
-```
-
-`TransFID` has the same usage as `FID`, but it is designed for translation models like `Pix2Pix` and `CycleGAN`, and is adapted to our evaluator. You can refer
-to [evaluation](../user_guides/train_test.md) for details.
-
-## IS and TransIS
-
-Inception score is an objective metric for evaluating the quality of generated images, proposed in [Improved Techniques for Training GANs](https://arxiv.org/pdf/1606.03498.pdf). It uses an InceptionV3 model to predict the class of the generated images, and assumes that 1) if an image is of high quality, it will be categorized into a specific class, and 2) if images are of high diversity, the range of the images' classes will be wide. So the KL-divergence between the conditional probability and the marginal probability can indicate the quality and diversity of generated images. You can see the complete implementation in `metrics.py`, which refers to https://github.com/sbarratt/inception-score-pytorch/blob/master/inception_score.py.
-If you want to evaluate models with `IS` metrics, you can add the `metrics` into your config file like this:
-
-```python
-# at the end of the configs/biggan/biggan_2xb25-500kiters_cifar10-32x32.py
-metrics = [
-    xxx,
-    dict(
-        type='IS',
-        prefix='IS-50k',
-        fake_nums=50000,
-        inception_style='StyleGAN',
-        sample_model='ema')
-]
-```
-
-Note that the choice of Inception V3 weights and the image resize method can significantly influence the final IS score. Therefore, we strongly recommend that users download [Tero's script model of Inception V3](https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt) (loading this script model requires torch >= 1.6) and use `Bicubic` interpolation with the `Pillow` backend.
-
-In the config, you can set `resize_method` and `use_pillow_resize` for image resizing. You can also set `inception_style` to `StyleGAN` for the recommended Tero's inception model, or to `PyTorch` for torchvision's implementation. For environments without internet access, you can download the inception weights beforehand and set `inception_path` to your inception model.
-
-We also perform a survey on the influence of the data loading pipeline and the version of pretrained Inception V3 on the IS result. All IS values are evaluated on the same group of images, which are randomly selected from the ImageNet dataset.
-
-<details> <summary> Show the Comparison Results </summary>
-
-|                            Code Base                            | Inception V3 Version | Data Loader Backend | Resize Interpolation Method |          IS           |
-| :-------------------------------------------------------------: | :------------------: | :-----------------: | :-------------------------: | :-------------------: |
-|   [OpenAI (baseline)](https://github.com/openai/improved-gan)   |      Tensorflow      |       Pillow        |       Pillow Bicubic        | **312.255 +/- 4.970** |
-| [StyleGAN-Ada](https://github.com/NVlabs/stylegan2-ada-pytorch) | Tero's Script Model  |       Pillow        |       Pillow Bicubic        |   311.895 +/- 4.844   |
-|                          mmagic (Ours)                          |  Pytorch Pretrained  |         cv2         |        cv2 Bilinear         |   322.932 +/- 2.317   |
-|                          mmagic (Ours)                          |  Pytorch Pretrained  |         cv2         |         cv2 Bicubic         |   324.604 +/- 5.157   |
-|                          mmagic (Ours)                          |  Pytorch Pretrained  |         cv2         |       Pillow Bicubic        |   318.161 +/- 5.330   |
-|                          mmagic (Ours)                          |  Pytorch Pretrained  |       Pillow        |       Pillow Bilinear       |   313.126 +/- 5.449   |
-|                          mmagic (Ours)                          |  Pytorch Pretrained  |       Pillow        |        cv2 Bilinear         |   318.021 +/- 3.864   |
-|                          mmagic (Ours)                          |  Pytorch Pretrained  |       Pillow        |       Pillow Bicubic        |   317.997 +/- 5.350   |
-|                          mmagic (Ours)                          | Tero's Script Model  |         cv2         |        cv2 Bilinear         |   318.879 +/- 2.433   |
-|                          mmagic (Ours)                          | Tero's Script Model  |         cv2         |         cv2 Bicubic         |   316.125 +/- 5.718   |
-|                          mmagic (Ours)                          | Tero's Script Model  |         cv2         |       Pillow Bicubic        | **312.045 +/- 5.440** |
-|                          mmagic (Ours)                          | Tero's Script Model  |       Pillow        |       Pillow Bilinear       |   308.645 +/- 5.374   |
-|                          mmagic (Ours)                          | Tero's Script Model  |       Pillow        |       Pillow Bicubic        |   311.733 +/- 5.375   |
-
-</details>
-
-`TransIS` has the same usage as `IS`, but it is designed for translation models like `Pix2Pix` and `CycleGAN`, and is adapted to our evaluator. You can refer
-to [evaluation](../user_guides/train_test.md) for details.
-
-## Precision and Recall
-
-Our `Precision and Recall` implementation follows the version used in StyleGAN2. In this metric, a VGG network is adopted to extract features from images. Unfortunately, we have not found a PyTorch VGG implementation that produces results similar to Tero's version used in StyleGAN2. (About the differences, please see this [file](https://github.com/open-mmlab/mmagic/blob/main/configs/styleganv2/README.md).) Thus, in our implementation, we adopt [Tero's VGG](https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt) network by default. Importantly, applying this script module needs `PyTorch >= 1.6.0`. With a lower PyTorch version, the official PyTorch VGG network is used for feature extraction.
-
-To evaluate with `P&R`, please add the following configuration in the config file:
-
-```python
-metrics = [
-    dict(type='PrecisionAndRecall', fake_nums=50000, prefix='PR-50K')
-]
-```
-
-## PPL
-
-Perceptual path length measures the difference between consecutive images (their VGG16 embeddings) when interpolating between two random inputs. Drastic changes mean that multiple features have changed together and that they might be entangled. Thus, in experiments, a smaller PPL score appears to indicate higher overall image quality. \
-As a basis for our metric, we use a perceptually-based pairwise image distance that is calculated as a weighted difference between two VGG16 embeddings, where the weights are fit so that the metric agrees with human perceptual similarity judgments.
-If we subdivide a latent space interpolation path into linear segments, we can define the total perceptual length of this segmented path as the sum of perceptual differences over each segment. A natural definition for the perceptual path length would then be the limit of this sum under infinitely fine subdivision, but in practice we approximate it using a small subdivision `` $`\epsilon=10^{-4}`$ ``.
-The average perceptual path length in latent space Z, over all possible endpoints, is therefore
-
-`` $$`L_Z = E\left[\frac{1}{\epsilon^2}\, d\big(G(slerp(z_1,z_2;t)),\ G(slerp(z_1,z_2;t+\epsilon))\big)\right]`$$ ``
-
-Computing the average perceptual path length in latent space W is carried out in a similar fashion, except that interpolation is linear since W is not normalized:
-
-`` $$`L_W = E\left[\frac{1}{\epsilon^2}\, d\big(g(lerp(f(z_1),f(z_2);t)),\ g(lerp(f(z_1),f(z_2);t+\epsilon))\big)\right]`$$ ``
-
-where `` $`z_1, z_2 \sim P(z)`$ ``, and `` $`t \sim U(0,1)`$ `` if we set `sampling` to full, or `` $`t \in \{0,1\}`$ `` if we set `sampling` to end. `` $`G`$ `` is the generator (i.e., `` $`g \circ f`$ `` for style-based networks, where `` $`f`$ `` is the mapping network), and `` $`d(\cdot,\cdot)`$ `` evaluates the perceptual distance between the resulting images. We compute the expectation by taking 100,000 samples (set `num_images` to 50,000 in our code).
-
-You can find the complete implementation in `metrics.py`, which refers to https://github.com/rosinality/stylegan2-pytorch/blob/master/ppl.py.
-If you want to evaluate models with `PPL` metrics, you can add the `metrics` into your config file like this:
-
-```python
-# at the end of the configs/styleganv2/stylegan2_c2_ffhq_1024_b4x8.py
-metrics = [
-    xxx,
-    dict(type='PerceptualPathLength', fake_nums=50000, prefix='ppl-w')
-]
-```
-
-## SWD
-
-Sliced Wasserstein distance is a discrepancy measure for probability distributions, and a smaller distance indicates that generated images look more like the real ones. We obtain the Laplacian pyramid of every image and extract patches from the Laplacian pyramids as descriptors; SWD is then calculated as the sliced Wasserstein distance between the real and fake descriptors.
-You can see the complete implementation in `metrics.py`, which refers to https://github.com/tkarras/progressive_growing_of_gans/blob/master/metrics/sliced_wasserstein.py.
-If you want to evaluate models with `SWD` metrics, you can add the `metrics` into your config file like this:
-
-```python
-# at the end of the configs/dcgan/dcgan_1xb128-5epoches_lsun-bedroom-64x64.py
-metrics = [
-    dict(
-        type='SWD',
-        prefix='swd',
-        fake_nums=16384,
-        sample_model='orig',
-        image_shape=(3, 64, 64))
-]
-```
-
-## MS-SSIM
-
-Multi-scale structural similarity is used to measure the similarity of two images. We use MS-SSIM here to measure the diversity of generated images; a low MS-SSIM score indicates high diversity of the generated images. You can see the complete implementation in `metrics.py`, which refers to https://github.com/tkarras/progressive_growing_of_gans/blob/master/metrics/ms_ssim.py.
-If you want to evaluate models with `MS-SSIM` metrics, you can add the `metrics` into your config file like this:
-
-```python
-# at the end of the configs/dcgan/dcgan_1xb128-5epoches_lsun-bedroom-64x64.py
-metrics = [
-    dict(
-        type='MS_SSIM', prefix='ms-ssim', fake_nums=10000,
-        sample_model='orig')
-]
-```
-
-## Equivariance
-
-Equivariance of generative models refers to the commutativity of the model forward pass with geometric transformations. Currently, this metric is only computed for StyleGANv3;
-you can see the complete implementation in `metrics.py`, which refers to https://github.com/NVlabs/stylegan3/blob/main/metrics/equivariance.py.
-If you want to evaluate models with the `Equivariance` metric, you can add the `metrics` into your config file like this:
-
-```python
-# at the end of the configs/styleganv3/stylegan3-t_gamma2.0_8xb4-fp16-noaug_ffhq-256x256.py
-metrics = [
-    dict(
-        type='Equivariance',
-        fake_nums=50000,
-        sample_mode='ema',
-        prefix='EQ',
-        eq_cfg=dict(
-            compute_eqt_int=True, compute_eqt_frac=True, compute_eqr=True))
-]
-```
diff --git a/docs/en/user_guides/train_test.md b/docs/en/user_guides/train_test.md
deleted file mode 100644
index 350b2b4212..0000000000
--- a/docs/en/user_guides/train_test.md
+++ /dev/null
@@ -1,216 +0,0 @@
-# Tutorial 4: Train and test in MMagic
-
-In this section, we introduce how to train and test models in MMagic.
-
-This section provides the following guides:
-
-- [Tutorial 4: Train and test in MMagic](#tutorial-4-train-and-test-in-mmagic)
-  - [Prerequisite](#prerequisite)
-  - [Test a model in MMagic](#test-a-model-in-mmagic)
-    - [Test with a single GPU](#test-with-a-single-gpu)
-    - [Test with multiple GPUs](#test-with-multiple-gpus)
-    - [Test with Slurm](#test-with-slurm)
-    - [Test with specific metrics](#test-with-specific-metrics)
-  - [Train a model in MMagic](#train-a-model-in-mmagic)
-    - [Train with a single GPU](#train-with-a-single-gpu)
-    - [Train with multiple nodes](#train-with-multiple-nodes)
-    - [Train with multiple GPUs](#train-with-multiple-gpus)
-    - [Train with Slurm](#train-with-slurm)
-    - [Optional arguments](#optional-arguments)
-  - [Train with specific evaluation metrics](#train-with-specific-evaluation-metrics)
-
-## Prerequisite
-
-Users need to [prepare dataset](../user_guides/dataset_prepare.md) first to enable training and testing models in MMagic.
-
-## Test a model in MMagic
-
-### Test with a single GPU
-
-You can use the following commands to test a pre-trained model with a single GPU.
-
-```shell
-python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE}
-```
-
-For example,
-
-```shell
-python tools/test.py configs/example_config.py work_dirs/example_exp/example_model_20200202.pth
-```
-
-### Test with multiple GPUs
-
-MMagic supports testing with multiple GPUs,
-which can greatly reduce the time it takes to test models.
-You can use the following commands to test a pre-trained model with multiple GPUs.
-
-```shell
-./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM}
-```
-
-For example,
-
-```shell
-./tools/dist_test.sh configs/example_config.py work_dirs/example_exp/example_model_20200202.pth 8
-```
-
-### Test with Slurm
-
-If you run MMagic on a cluster managed with [slurm](https://slurm.schedmd.com/), you can use the script `slurm_test.sh`. (This script also supports single machine testing.)
-
-```shell
-[GPUS=${GPUS}] ./tools/slurm_test.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${CHECKPOINT_FILE}
-```
-
-Here is an example of using 8 GPUs to test an example model on the 'dev' partition with the job name 'test'.
-
-```shell
-GPUS=8 ./tools/slurm_test.sh dev test configs/example_config.py work_dirs/example_exp/example_model_20200202.pth
-```
-
-You can check [slurm_test.sh](../../../tools/slurm_test.sh) for full arguments and environment variables.
-
-### Test with specific metrics
-
-MMagic provides various **evaluation metrics**, e.g., MS-SSIM, SWD, IS, FID, Precision&Recall, PPL, Equivariance, TransFID, TransIS, etc.
-We have provided unified evaluation scripts in [tools/test.py](https://github.com/open-mmlab/mmagic/tree/main/tools/test.py) for all models.
-If you want to evaluate your models with specific metrics, you can add `metrics` to your config file like this:
-
-```python
-# at the end of the configs/styleganv2/stylegan2_c2_ffhq_256_b4x8_800k.py
-metrics = [
-    dict(
-        type='FrechetInceptionDistance',
-        prefix='FID-Full-50k',
-        fake_nums=50000,
-        inception_style='StyleGAN',
-        sample_model='ema'),
-    dict(type='PrecisionAndRecall', fake_nums=50000, prefix='PR-50K'),
-    dict(type='PerceptualPathLength', fake_nums=50000, prefix='ppl-w')
-]
-```
-
-As above, `metrics` consists of multiple metric dictionaries. Each metric must contain a `type` field to indicate the category of the metric. `fake_nums` denotes the number of images generated by the model. Some metrics output a dictionary of results; you can also set `prefix` to specify a prefix for the results.
-If you set the prefix of FID to `FID-Full-50k`, an example of the output may be
-
-```bash
-FID-Full-50k/fid: 3.6561  FID-Full-50k/mean: 0.4263  FID-Full-50k/cov: 3.2298
-```
-
-Then users can test models with the command below:
-
-```shell
-bash tools/dist_test.sh ${CONFIG_FILE} ${CKPT_FILE} ${GPU_NUM}
-```
-
-If you are in a slurm environment, please switch to [tools/slurm_test.sh](https://github.com/open-mmlab/mmagic/tree/main/tools/slurm_test.sh) using the following commands:
-
-```shell
-sh tools/slurm_test.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${CKPT_FILE}
-```
-
-## Train a model in MMagic
-
-MMagic supports multiple ways of training:
-
-1. [Train with a single GPU](#train-with-a-single-gpu)
-2. [Train with multiple nodes](#train-with-multiple-nodes)
-3. [Train with multiple GPUs](#train-with-multiple-gpus)
-4. [Train with Slurm](#train-with-slurm)
-
-Specifically, all outputs (log files and checkpoints) will be saved to the working directory,
-which is specified by `work_dir` in the config file.
-
-### Train with a single GPU
-
-```shell
-CUDA_VISIBLE_DEVICES=0 python tools/train.py configs/example_config.py --work-dir work_dirs/example
-```
-
-### Train with multiple nodes
-
-To launch distributed training on multiple machines that can reach each other over the network, run the following commands:
-
-On the first machine:
-
-```shell
-NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR tools/dist_train.sh $CONFIG $GPUS
-```
-
-On the second machine:
-
-```shell
-NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR tools/dist_train.sh $CONFIG $GPUS
-```
-
-To speed up network communication, high-speed network hardware such as InfiniBand is recommended.
-Please refer to [PyTorch docs](https://pytorch.org/docs/1.11/distributed.html#launch-utility) for more information.
-
-### Train with multiple GPUs
-
-```shell
-./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [optional arguments]
-```
-
-### Train with Slurm
-
-If you run MMagic on a cluster managed with [slurm](https://slurm.schedmd.com/), you can use the script `slurm_train.sh`. (This script also supports single machine training.)
-
-```shell
-[GPUS=${GPUS}] ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${WORK_DIR}
-```
-
-Here is an example of using 8 GPUs to train an inpainting model on the dev partition with the job name 'train'.
-
-```shell
-GPUS=8 ./tools/slurm_train.sh dev train configs/inpainting/gl_places.py /nfs/xxxx/gl_places_256
-```
-
-You can check [slurm_train.sh](https://github.com/open-mmlab/mmagic/blob/master/tools/slurm_train.sh) for full arguments and environment variables.
-
-### Optional arguments
-
-- `--amp`: This argument is used for automatic mixed-precision training.
-- `--resume`: This argument is used to automatically resume training if it is aborted.
-
-## Train with specific evaluation metrics
-
-Benefiting from `mmengine`'s `Runner`, we can evaluate a model during training in a simple way, as below.
-
-```python
-# define metrics
-metrics = [
-    dict(
-        type='FrechetInceptionDistance',
-        prefix='FID-Full-50k',
-        fake_nums=50000,
-        inception_style='StyleGAN')
-]
-
-# define dataloader
-val_dataloader = dict(
-    batch_size=128,
-    num_workers=8,
-    dataset=dict(
-        type='BasicImageDataset',
-        data_root='data/celeba-cropped/',
-        pipeline=[
-            dict(type='LoadImageFromFile', key='img'),
-            dict(type='Resize', scale=(64, 64)),
-            dict(type='PackInputs')
-        ]),
-    sampler=dict(type='DefaultSampler', shuffle=False),
-    persistent_workers=True)
-
-# define val interval
-train_cfg = dict(by_epoch=False, val_begin=1, val_interval=10000)
-
-# define val loop and evaluator
-val_cfg = dict(type='MultiValLoop')
-val_evaluator = dict(type='Evaluator', metrics=metrics)
-```
-
-You can set `val_begin` and `val_interval` to adjust when validation begins and how often it runs.
-
-For details of metrics, refer to [metrics' guide](./metrics.md).
diff --git a/docs/en/user_guides/useful_tools.md b/docs/en/user_guides/useful_tools.md
deleted file mode 100644
index fdd7d061d1..0000000000
--- a/docs/en/user_guides/useful_tools.md
+++ /dev/null
@@ -1,71 +0,0 @@
-# Tutorial 7: Useful tools
-
-We provide lots of useful tools under `tools/` directory.
-
-The structure of this guide is as follows:
-
-- [Tutorial 7: Useful tools](#tutorial-7-useful-tools)
-  - [Get the FLOPs and params](#get-the-flops-and-params)
-  - [Publish a model](#publish-a-model)
-  - [Print full config](#print-full-config)
-
-## Get the FLOPs and params
-
-We provide a script adapted from [flops-counter.pytorch](https://github.com/sovrasov/flops-counter.pytorch) to compute the FLOPs and params of a given model.
-
-```shell
-python tools/analysis_tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
-```
-
-For example,
-
-```shell
-python tools/analysis_tools/get_flops.py configs/resotorer/srresnet.py --shape 40 40
-```
-
-You will get a result like this:
-
-```
-==============================
-Input shape: (3, 40, 40)
-Flops: 4.07 GMac
-Params: 1.52 M
-==============================
-```
-
-**Note**: This tool is still experimental and we do not guarantee that the number is correct. You may use the result for simple comparisons, but double-check it before you adopt it in technical reports or papers.
-
-(1) FLOPs are related to the input shape while parameters are not. The default input shape is (1, 3, 250, 250).
-(2) Some operators, such as GN and custom operators, are not counted in FLOPs.
-You can add support for new operators by modifying [`mmcv/cnn/utils/flops_counter.py`](https://github.com/open-mmlab/mmcv/blob/master/mmcv/cnn/utils/flops_counter.py).
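-
-As a rough illustration, the underlying counter can also be called directly from Python (a sketch, assuming an mmcv version that provides `get_model_complexity_info`):
-
-```python
-import torch.nn as nn
-from mmcv.cnn import get_model_complexity_info
-
-# A toy two-layer network; replace with any nn.Module built from a config.
-model = nn.Sequential(
-    nn.Conv2d(3, 16, 3, padding=1),
-    nn.ReLU(),
-    nn.Conv2d(16, 3, 3, padding=1),
-)
-flops, params = get_model_complexity_info(model, (3, 40, 40))
-print(f'Flops: {flops}, Params: {params}')
-```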
-
-## Publish a model
-
-Before you upload a model to AWS, you may want to
-
-1. convert model weights to CPU tensors
-2. delete the optimizer states and
-3. compute the hash of the checkpoint file and append time and the hash id to the
-   filename.
-
-```shell
-python tools/model_converters/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
-```
-
-E.g.,
-
-```shell
-python tools/model_converters/publish_model.py work_dirs/stylegan2/latest.pth stylegan2_c2_8xb4_ffhq-1024x1024.pth
-```
-
-The final output filename will be `stylegan2_c2_8xb4_ffhq-1024x1024_{time}-{hash id}.pth`.
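-
-A minimal sketch of what these three steps amount to; the real script may differ in details such as the hash function and metadata handling:
-
-```python
-import hashlib
-import os
-import time
-
-import torch
-
-
-def publish(in_file, out_file):
-    ckpt = torch.load(in_file, map_location='cpu')  # 1. weights to CPU tensors
-    ckpt.pop('optimizer', None)                     # 2. drop optimizer states
-    torch.save(ckpt, out_file)
-    with open(out_file, 'rb') as f:                 # 3. hash the saved file
-        sha = hashlib.sha256(f.read()).hexdigest()[:8]
-    stamp = time.strftime('%Y%m%d')
-    final_name = out_file.replace('.pth', f'_{stamp}-{sha}.pth')
-    os.rename(out_file, final_name)
-    return final_name
-```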
-
-## Print full config
-
-MMagic incorporates a config mechanism to set the parameters used for training and testing models. With our [config](../user_guides/config.md) mechanism, users can easily conduct extensive experiments without hard coding. If you wish to inspect the config file, you may run `python tools/misc/print_config.py /PATH/TO/CONFIG` to see the complete config.
-
-An Example:
-
-```shell
-python tools/misc/print_config.py configs/styleganv2/stylegan2_c2-PL_8xb4-fp16-partial-GD-no-scaler-800kiters_ffhq-256x256.py
-```
diff --git a/docs/en/user_guides/visualization.md b/docs/en/user_guides/visualization.md
deleted file mode 100644
index 55b7963316..0000000000
--- a/docs/en/user_guides/visualization.md
+++ /dev/null
@@ -1,337 +0,0 @@
-# Tutorial 6: Visualization
-
-The visualization of images is an important way to measure the quality of image processing, editing and synthesis.
-Configuring `visualizer` in the config file saves visual results during training or testing. You can follow the [MMEngine documentation](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/visualization.md) to learn the basic usage of visualization. MMagic provides a rich set of visualization functions.
-In this tutorial, we introduce the usage of the visualization functions provided by MMagic.
-
-- [Tutorial 6: Visualization](#tutorial-6-visualization)
-  - [Overview](#overview)
-    - [Visualization configuration of GANs](#visualization-configuration-of-gans)
-    - [Visualization configuration of image translation models](#visualization-configuration-of-image-translation-models)
-    - [Visualization configuration of diffusion models](#visualization-configuration-of-diffusion-models)
-    - [Visualization configuration of inpainting models](#visualization-configuration-of-inpainting-models)
-    - [Visualization configuration of matting models](#visualization-configuration-of-matting-models)
-    - [Visualization configuration of SISR/VSR/VFI models](#visualization-configuration-of-sisrvsrvfi-models)
-  - [Visualization Hook](#visualization-hook)
-  - [Visualizer](#visualizer)
-  - [VisBackend](#visbackend)
-    - [Use Different Storage Backends](#use-different-storage-backends)
-
-## Overview
-
-It is recommended to learn the basic concepts of visualization in the design documentation.
-
-In MMagic, the visualization of the training or testing process requires the configuration of three components: `VisualizationHook`, `Visualizer`, and `VisBackend`. The diagram below shows the relationship between `Visualizer` and `VisBackend`.
-
-<div align="center">
-<img src="https://user-images.githubusercontent.com/17425982/163327736-f7cb3b16-ef07-46bc-982a-3cc7495e6c82.png" width="800" />
-</div>
-
-**VisualizationHook** fetches the visualization results of the model output at fixed intervals during training and passes them to **Visualizer**.
-**Visualizer** is responsible for converting the original visualization results into the desired type (png, gif, etc.) and then transferring them to **VisBackend** for storage or display.
-
-### Visualization configuration of GANs
-
-For GAN models, such as StyleGAN and SAGAN, a usual configuration is shown below:
-
-```python
-# VisualizationHook
-custom_hooks = [
-    dict(
-        type='VisualizationHook',
-        interval=5000,  # visualization interval
-        fixed_input=True,  # whether to use a fixed noise input to generate images
-        vis_kwargs_list=dict(type='GAN', name='fake_img')  # pre-defined visualization arguments for GAN models
-    )
-]
-# VisBackend
-vis_backends = [
-    dict(type='VisBackend'),  # vis_backend for saving images to file system
-    dict(type='WandbVisBackend',  # vis_backend for uploading images to Wandb
-        init_kwargs=dict(
-            project='MMagic',   # project name for Wandb
-            name='GAN-Visualization-Demo'  # name of the experiment for Wandb
-        ))
-]
-# Visualizer
-visualizer = dict(type='Visualizer', vis_backends=vis_backends)
-```
-
-If you apply Exponential Moving Average (EMA) to a generator and want to visualize the EMA model, you can modify config of `VisualizationHook` as below:
-
-```python
-custom_hooks = [
-    dict(
-        type='VisualizationHook',
-        interval=5000,
-        fixed_input=True,
-        # vis ema and orig in `fake_img` at the same time
-        vis_kwargs_list=dict(
-            type='Noise',
-            name='fake_img',  # save images with prefix `fake_img`
-            sample_model='ema/orig',  # kwargs passed to the `NoiseSampler`
-            target_keys=['ema.fake_img', 'orig.fake_img']  # specify the keys to visualize
-        ))
-]
-```
-
-### Visualization configuration of image translation models
-
-For Translation models, such as CycleGAN and Pix2Pix, visualization configs can be formed as below:
-
-```python
-# VisualizationHook
-custom_hooks = [
-    dict(
-        type='VisualizationHook',
-        interval=5000,
-        fixed_input=True,
-        vis_kwargs_list=[
-            dict(
-                type='Translation',  # Visualize results on the training set
-                name='trans'),  #  save images with prefix `trans`
-            dict(
-                type='TranslationVal',  # Visualize results on the validation set
-                name='trans_val'),  #  save images with prefix `trans_val`
-        ])
-]
-# VisBackend
-vis_backends = [
-    dict(type='VisBackend'),  # vis_backend for saving images to file system
-    dict(type='WandbVisBackend',  # vis_backend for uploading images to Wandb
-        init_kwargs=dict(
-            project='MMagic',   # project name for Wandb
-            name='Translation-Visualization-Demo'  # name of the experiment for Wandb
-        ))
-]
-# Visualizer
-visualizer = dict(type='Visualizer', vis_backends=vis_backends)
-```
-
-### Visualization configuration of diffusion models
-
-For Diffusion models, such as Improved-DDPM, we can use the following configuration to visualize the denoising process through a gif:
-
-```python
-# VisualizationHook
-custom_hooks = [
-    dict(
-        type='VisualizationHook',
-        interval=5000,
-        fixed_input=True,
-        vis_kwargs_list=dict(type='DDPMDenoising'))  # pre-defined visualization argument for DDPM models
-]
-# VisBackend
-vis_backends = [
-    dict(type='VisBackend'),  # vis_backend for saving images to file system
-    dict(type='WandbVisBackend',  # vis_backend for uploading images to Wandb
-        init_kwargs=dict(
-            project='MMagic',   # project name for Wandb
-            name='Diffusion-Visualization-Demo'  # name of the experiment for Wandb
-        ))
-]
-# Visualizer
-visualizer = dict(type='Visualizer', vis_backends=vis_backends)
-```
-
-### Visualization configuration of inpainting models
-
-For inpainting models, such as AOT-GAN and Global&Local, a usual configuration is shown below:
-
-```python
-# VisBackend
-vis_backends = [dict(type='LocalVisBackend')]
-# Visualizer
-visualizer = dict(
-    type='ConcatImageVisualizer',
-    vis_backends=vis_backends,
-    fn_key='gt_path',
-    img_keys=['gt_img', 'input', 'pred_img'],
-    bgr2rgb=True)
-# VisualizationHook
-custom_hooks = [dict(type='BasicVisualizationHook', interval=1)]
-```
-
-### Visualization configuration of matting models
-
-For matting models, such as DIM and GCA, a usual configuration is shown below:
-
-```python
-# VisBackend
-vis_backends = [dict(type='LocalVisBackend')]
-# Visualizer
-visualizer = dict(
-    type='ConcatImageVisualizer',
-    vis_backends=vis_backends,
-    fn_key='trimap_path',
-    img_keys=['pred_alpha', 'trimap', 'gt_merged', 'gt_alpha'],
-    bgr2rgb=True)
-# VisualizationHook
-custom_hooks = [dict(type='BasicVisualizationHook', interval=1)]
-```
-
-### Visualization configuration of SISR/VSR/VFI models
-
-For SISR/VSR/VFI models, such as EDSR, EDVR and CAIN, a usual configuration is shown below:
-
-```python
-# VisBackend
-vis_backends = [dict(type='LocalVisBackend')]
-# Visualizer
-visualizer = dict(
-    type='ConcatImageVisualizer',
-    vis_backends=vis_backends,
-    fn_key='gt_path',
-    img_keys=['gt_img', 'input', 'pred_img'],
-    bgr2rgb=False)
-# VisualizationHook
-custom_hooks = [dict(type='BasicVisualizationHook', interval=1)]
-```
-
-The specific configuration of the `VisualizationHook`, `Visualizer` and `VisBackend` components is described below.
-
-## Visualization Hook
-
-In MMagic, `BasicVisualizationHook` and `VisualizationHook` serve as the visualization hooks.
-`VisualizationHook` supports the following three cases.
-
-(1) Modify `vis_kwargs_list` to visualize the output of the model under specific inputs. This is suitable for visualizing the generated results of GANs, the translation results of image-to-image translation models under specific data inputs, and so on. Below are two typical examples:
-
-```python
-# input as dict
-vis_kwargs_list = dict(
-    type='Noise',  # use 'Noise' sampler to generate model input
-    name='fake_img',  # define prefix of saved images
-)
-
-# input as list of dict
-vis_kwargs_list = [
-    dict(type='Arguments',  # use `Arguments` sampler to generate model input
-         name='arg_output',  # define prefix of saved images
-         vis_mode='gif',  # specify the visualization mode as GIF
-         forward_kwargs=dict(forward_mode='sampling', sample_kwargs=dict(show_pbar=True))  # kwargs for the `Arguments` sampler
-    ),
-    dict(type='Data',  # use `Data` sampler to feed data from the dataloader to the model as input
-         n_samples=36,  # specify how many samples to generate
-         fixed_input=False,  # do not use a fixed input for each visualization process
-    )
-]
-```
-
-`vis_kwargs_list` takes a dict or a list of dicts as input. Each dict must contain a `type` field indicating the **type of sampler** used to generate the model input, and each dict must also contain the keyword fields necessary for that sampler (e.g. `ArgumentSampler` requires that the argument dictionary contain `forward_kwargs`).
-
-> Note that this content is checked by the corresponding sampler and is not restricted by `BasicVisualizationHook`.
-
-In addition, the other fields are generic fields (e.g. `n_samples`, `n_row`, `name`, `fixed_input`, etc.).
-If they are not passed in, the default values from the `BasicVisualizationHook` initialization will be used.
-
-For the convenience of users, MMagic has pre-defined visualization parameters for **GAN**, **Translation models**, **SinGAN** and **Diffusion models**, and users can directly use the predefined visualization methods by using the following configuration:
-
-```python
-vis_kwargs_list = dict(type='GAN')
-vis_kwargs_list = dict(type='SinGAN')
-vis_kwargs_list = dict(type='Translation')
-vis_kwargs_list = dict(type='TranslationVal')
-vis_kwargs_list = dict(type='TranslationTest')
-vis_kwargs_list = dict(type='DDPMDenoising')
-```
-
-## Visualizer
-
-In MMagic, we implement `ConcatImageVisualizer` and `Visualizer`, which inherit from `mmengine.Visualizer`.
-The base class of `Visualizer` is `ManagerMixin`, which makes `Visualizer` a globally unique object.
-After being instantiated, `Visualizer` can be called anywhere in the code via `Visualizer.get_current_instance()`, as shown below:
-
-```python
-# configs
-vis_backends = [dict(type='VisBackend')]
-visualizer = dict(
-    type='Visualizer', vis_backends=vis_backends, name='visualizer')
-```
-
-```python
-# `get_instance()` is called for globally unique instantiation
-VISUALIZERS.build(cfg.visualizer)
-
-# Once instantiated by the above code, you can call the `get_current_instance` method at any location to get the visualizer
-visualizer = Visualizer.get_current_instance()
-```
-
-The core interface of `Visualizer` is `add_datasample`.
-This interface calls the corresponding drawing function according to the given `vis_mode`
-to obtain the visualization result as an `np.ndarray`.
-Then `show` or `add_image` is called to directly display the result or pass it to the predefined vis_backend.
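-
-For example, once the visualizer has been built, you can push an arbitrary image to every configured backend through the generic `add_image` interface inherited from `mmengine.Visualizer` (a minimal sketch):
-
-```python
-import numpy as np
-from mmengine.visualization import Visualizer
-
-# Assumes the visualizer was already built from the config as shown above.
-visualizer = Visualizer.get_current_instance()
-# A dummy RGB image; in practice this is a visualization result.
-img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
-visualizer.add_image('demo', img, step=0)  # dispatched to every vis_backend
-```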
-
-## VisBackend
-
-In general, users do not need to manipulate `VisBackend` objects directly; only when the current visualization storage cannot meet their needs do they need to operate on the storage backend themselves.
-MMagic supports a variety of visualization backends, including:
-
-- Basic VisBackends of MMEngine, including LocalVisBackend, TensorboardVisBackend and WandbVisBackend. You can follow the [MMEngine documentation](https://github.com/open-mmlab/mmengine/blob/main/docs/en/advanced_tutorials/visualization.md) to learn more about them.
-- VisBackend: Backend for the **file system**. Saves the visualization results to the configured location.
-- TensorboardVisBackend: Backend for **Tensorboard**. Sends the visualization results to Tensorboard.
-- WandbVisBackend: Backend for **Wandb**. Sends the visualization results to Wandb.
-
-One `Visualizer` object can have access to any number of VisBackends, and users can access a backend by its class name in their code.
-
-```python
-# configs
-vis_backends = [dict(type='VisBackend'), dict(type='WandbVisBackend')]
-visualizer = dict(
-    type='Visualizer', vis_backends=vis_backends, name='visualizer')
-```
-
-```python
-# code
-VISUALIZERS.build(cfg.visualizer)
-visualizer = Visualizer.get_current_instance()
-
-# access to the backend by class name
-gen_vis_backend = visualizer.get_backend('VisBackend')
-gen_wandb_vis_backend = visualizer.get_backend('WandbVisBackend')
-```
-
-When there are multiple VisBackends with the same class name, users must specify a name for each VisBackend.
-
-```python
-# configs
-vis_backends = [
-    dict(type='VisBackend', name='gen_vis_backend_1'),
-    dict(type='VisBackend', name='gen_vis_backend_2')
-]
-visualizer = dict(
-    type='Visualizer', vis_backends=vis_backends, name='visualizer')
-```
-
-```python
-# code
-VISUALIZERS.build(cfg.visualizer)
-visualizer = Visualizer.get_current_instance()
-
-local_vis_backend_1 = visualizer.get_backend('gen_vis_backend_1')
-local_vis_backend_2 = visualizer.get_backend('gen_vis_backend_2')
-```
-
-### Use Different Storage Backends
-
-If you want to use a different backend (Wandb, Tensorboard, or a custom backend with a remote window), just change the `vis_backends` in the config, as follows:
-
-**Local**
-
-```python
-vis_backends = [dict(type='LocalVisBackend')]
-```
-
-**Tensorboard**
-
-```python
-vis_backends = [dict(type='TensorboardVisBackend')]
-visualizer = dict(
-    type='ConcatImageVisualizer', vis_backends=vis_backends, name='visualizer')
-```
-
-**Wandb**
-
-```python
-vis_backends = [dict(type='WandbVisBackend')]
-visualizer = dict(
-    type='ConcatImageVisualizer', vis_backends=vis_backends, name='visualizer')
-```
diff --git a/docs/zh_cn/faq.md b/docs/faq.md
similarity index 100%
rename from docs/zh_cn/faq.md
rename to docs/faq.md
diff --git a/docs/zh_cn/get_started/install.md b/docs/get_started/install.md
similarity index 100%
rename from docs/zh_cn/get_started/install.md
rename to docs/get_started/install.md
diff --git a/docs/zh_cn/get_started/overview.md b/docs/get_started/overview.md
similarity index 100%
rename from docs/zh_cn/get_started/overview.md
rename to docs/get_started/overview.md
diff --git a/docs/zh_cn/get_started/quick_run.md b/docs/get_started/quick_run.md
similarity index 100%
rename from docs/zh_cn/get_started/quick_run.md
rename to docs/get_started/quick_run.md
diff --git a/docs/zh_cn/howto/dataset.md b/docs/howto/dataset.md
similarity index 100%
rename from docs/zh_cn/howto/dataset.md
rename to docs/howto/dataset.md
diff --git a/docs/zh_cn/howto/losses.md b/docs/howto/losses.md
similarity index 100%
rename from docs/zh_cn/howto/losses.md
rename to docs/howto/losses.md
diff --git a/docs/zh_cn/howto/models.md b/docs/howto/models.md
similarity index 100%
rename from docs/zh_cn/howto/models.md
rename to docs/howto/models.md
diff --git a/docs/zh_cn/howto/transforms.md b/docs/howto/transforms.md
similarity index 100%
rename from docs/zh_cn/howto/transforms.md
rename to docs/howto/transforms.md
diff --git a/README_zh-CN.md b/docs/index.md
similarity index 97%
rename from README_zh-CN.md
rename to docs/index.md
index 17f0c7188c..5dc09a09a3 100644
--- a/README_zh-CN.md
+++ b/docs/index.md
@@ -121,14 +121,20 @@
 
 ## 📄 目录
 
+- [� 最新进展 ](#-最新进展-)
+  - [最新的 **MMagic v1.1.0** 版本已经在 \[22/09/2023\] 发布:](#最新的-mmagic-v110-版本已经在-22092023-发布)
+- [📄 目录](#-目录)
 - [📖 介绍](#-介绍)
+  - [✨ 主要特性](#-主要特性)
+  - [✨ 最佳实践](#-最佳实践)
 - [🙌 参与贡献](#-参与贡献)
-- [🛠️ 安装](#%EF%B8%8F-安装)
+- [🛠️ 安装](#️-安装)
 - [📊 模型库](#-模型库)
 - [🤝 致谢](#-致谢)
-- [🖊️ 引用](#%EF%B8%8F-引用)
+- [🖊️ 引用](#️-引用)
 - [🎫 许可证](#-许可证)
-- [🏗️ ️OpenMMLab 的其他项目](#%EF%B8%8F-️openmmlab-的其他项目)
+- [🏗️ ️OpenMMLab 的其他项目](#️-️openmmlab-的其他项目)
+- [欢迎加入 OpenMMLab 社区](#欢迎加入-openmmlab-社区)
 
 ## 📖 介绍
 
@@ -491,4 +497,4 @@ MMagic 是一款由不同学校和公司共同贡献的开源项目。我们感
 - 🏃 获取更高效的问题答疑和意见反馈
 - 🔥 提供与各行各业开发者充分交流的平台
 
-干货满满 📘,等你来撩 💗,OpenMMLab 社区期待您的加入 👬
+干货满满 📘,等你来撩 💗,OpenMMLab 社区期待您的加入 👬
\ No newline at end of file
diff --git a/docs/zh_cn/migration/amp.md b/docs/migration/amp.md
similarity index 100%
rename from docs/zh_cn/migration/amp.md
rename to docs/migration/amp.md
diff --git a/docs/zh_cn/migration/data.md b/docs/migration/data.md
similarity index 100%
rename from docs/zh_cn/migration/data.md
rename to docs/migration/data.md
diff --git a/docs/zh_cn/migration/distributed_train.md b/docs/migration/distributed_train.md
similarity index 100%
rename from docs/zh_cn/migration/distributed_train.md
rename to docs/migration/distributed_train.md
diff --git a/docs/zh_cn/migration/eval_test.md b/docs/migration/eval_test.md
similarity index 100%
rename from docs/zh_cn/migration/eval_test.md
rename to docs/migration/eval_test.md
diff --git a/docs/zh_cn/migration/models.md b/docs/migration/models.md
similarity index 100%
rename from docs/zh_cn/migration/models.md
rename to docs/migration/models.md
diff --git a/docs/zh_cn/migration/optimizers.md b/docs/migration/optimizers.md
similarity index 100%
rename from docs/zh_cn/migration/optimizers.md
rename to docs/migration/optimizers.md
diff --git a/docs/zh_cn/migration/overview.md b/docs/migration/overview.md
similarity index 100%
rename from docs/zh_cn/migration/overview.md
rename to docs/migration/overview.md
diff --git a/docs/zh_cn/migration/runtime.md b/docs/migration/runtime.md
similarity index 100%
rename from docs/zh_cn/migration/runtime.md
rename to docs/migration/runtime.md
diff --git a/docs/zh_cn/migration/schedule.md b/docs/migration/schedule.md
similarity index 100%
rename from docs/zh_cn/migration/schedule.md
rename to docs/migration/schedule.md
diff --git a/docs/zh_cn/migration/visualization.md b/docs/migration/visualization.md
similarity index 100%
rename from docs/zh_cn/migration/visualization.md
rename to docs/migration/visualization.md
diff --git a/docs/zh_cn/user_guides/config.md b/docs/user_guides/config.md
similarity index 100%
rename from docs/zh_cn/user_guides/config.md
rename to docs/user_guides/config.md
diff --git a/docs/zh_cn/user_guides/dataset_prepare.md b/docs/user_guides/dataset_prepare.md
similarity index 100%
rename from docs/zh_cn/user_guides/dataset_prepare.md
rename to docs/user_guides/dataset_prepare.md
diff --git a/docs/zh_cn/user_guides/deploy.md b/docs/user_guides/deploy.md
similarity index 100%
rename from docs/zh_cn/user_guides/deploy.md
rename to docs/user_guides/deploy.md
diff --git a/docs/zh_cn/user_guides/index.rst b/docs/user_guides/index.rst
similarity index 100%
rename from docs/zh_cn/user_guides/index.rst
rename to docs/user_guides/index.rst
diff --git a/docs/zh_cn/user_guides/inference.md b/docs/user_guides/inference.md
similarity index 100%
rename from docs/zh_cn/user_guides/inference.md
rename to docs/user_guides/inference.md
diff --git a/docs/zh_cn/user_guides/metrics.md b/docs/user_guides/metrics.md
similarity index 100%
rename from docs/zh_cn/user_guides/metrics.md
rename to docs/user_guides/metrics.md
diff --git a/docs/zh_cn/user_guides/train_test.md b/docs/user_guides/train_test.md
similarity index 100%
rename from docs/zh_cn/user_guides/train_test.md
rename to docs/user_guides/train_test.md
diff --git a/docs/zh_cn/user_guides/useful_tools.md b/docs/user_guides/useful_tools.md
similarity index 100%
rename from docs/zh_cn/user_guides/useful_tools.md
rename to docs/user_guides/useful_tools.md
diff --git a/docs/zh_cn/user_guides/visualization.md b/docs/user_guides/visualization.md
similarity index 100%
rename from docs/zh_cn/user_guides/visualization.md
rename to docs/user_guides/visualization.md
diff --git a/docs/zh_cn/.dev_scripts/update_dataset_zoo.py b/docs/zh_cn/.dev_scripts/update_dataset_zoo.py
deleted file mode 100644
index 3383a0187c..0000000000
--- a/docs/zh_cn/.dev_scripts/update_dataset_zoo.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import os
-import shutil
-
-from tqdm import tqdm
-
-
-def update_dataset_zoo():
-
-    target_dir = 'dataset_zoo'
-    source_dir = '../../tools/dataset_converters'
-    os.makedirs(target_dir, exist_ok=True)
-
-    # generate overview
-    overviewmsg = """
-# 概览
-
-"""
-
-    # generate index.rst
-    rstmsg = """
-.. toctree::
-   :maxdepth: 1
-   :caption: Dataset Zoo
-
-   overview.md
-"""
-
-    subfolders = os.listdir(source_dir)
-    for subf in tqdm(subfolders, desc='update dataset zoo'):
-
-        target_subf = subf.replace('-', '_').lower()
-        target_readme = os.path.join(target_dir, target_subf + '.md')
-        source_readme = os.path.join(source_dir, subf, 'README_zh-CN.md')
-        if not os.path.exists(source_readme):
-            continue
-
-        overviewmsg += f'\n- [{subf}]({target_subf}.md)'
-        rstmsg += f'\n   {target_subf}.md'
-
-        # generate all tasks dataset_zoo
-        shutil.copyfile(source_readme, target_readme)
-
-    with open(os.path.join(target_dir, 'overview.md'), 'w') as f:
-        f.write(overviewmsg)
-
-    with open(os.path.join(target_dir, 'index.rst'), 'w') as f:
-        f.write(rstmsg)
-
-
-if __name__ == '__main__':
-    update_dataset_zoo()
diff --git a/docs/zh_cn/.dev_scripts/update_model_zoo.py b/docs/zh_cn/.dev_scripts/update_model_zoo.py
deleted file mode 100755
index 7e62b0d8fe..0000000000
--- a/docs/zh_cn/.dev_scripts/update_model_zoo.py
+++ /dev/null
@@ -1,194 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) OpenMMLab. All rights reserved.
-
-import functools as func
-import glob
-import os
-import os.path as osp
-import re
-from os.path import basename, dirname
-
-import numpy as np
-import titlecase
-from tqdm import tqdm
-
-github_link = 'https://github.com/open-mmlab/mmagic/blob/main/'
-
-
-def anchor(name):
-    return re.sub(r'-+', '-',
-                  re.sub(r'[^a-zA-Z0-9\+]', '-',
-                         name.strip().lower())).strip('-')
-
-
-def summarize(stats, name):
-    allpapers = func.reduce(lambda a, b: a.union(b),
-                            [p for p, _, _, _, _, _, _ in stats])
-    allconfigs = func.reduce(lambda a, b: a.union(b),
-                             [c for _, c, _, _, _, _, _ in stats])
-    allckpts = func.reduce(lambda a, b: a.union(b),
-                           [c for _, _, c, _, _, _, _ in stats])
-    alltasks = func.reduce(lambda a, b: a.union(b),
-                           [t for _, _, _, t, _, _, _ in stats])
-    task_desc = '\n'.join([
-        f"    - [{task}]({task.replace('-', '_').replace(' ', '_').lower()}.md)"  # noqa
-        for task in list(alltasks)
-    ])
-
-    # Overview
-    papertypes, papercounts = np.unique([t for t, _ in allpapers],
-                                        return_counts=True)
-    countstr = '\n'.join(
-        [f'   - {t}: {c}' for t, c in zip(papertypes, papercounts)])
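-    # NOTE: the next line overrides the per-type counts built above.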
-    countstr = '\n'.join([f'   - ALGORITHM: {len(stats)}'])
-
-    summary = f"""# {name}
-"""
-
-    if name != 'Overview':
-        summary += '\n## 概览'
-
-    summary += f"""
-* 预训练权重个数: {len(allckpts)}
-* 配置文件个数: {len(allconfigs)}
-* 论文个数: {len(allpapers)}
-{countstr}
-    """
-
-    if name == 'Overview':
-        summary += f"""
-* 任务:
-{task_desc}
-
-"""
-
-    return summary
-
-
-# Count algorithms
-def update_model_zoo():
-
-    target_dir = 'model_zoo'
-
-    os.makedirs(target_dir, exist_ok=True)
-
-    root_dir = dirname(dirname(dirname(dirname(osp.abspath(__file__)))))
-    files = sorted(glob.glob(osp.join(root_dir, 'configs/*/README_zh-CN.md')))
-    stats = []
-
-    for f in tqdm(files, desc='update model zoo'):
-        with open(f, 'r') as content_file:
-            content = content_file.read()
-
-        # title
-        title = content.split('\n')[0].replace('#', '')
-        year = title.split('\'')[-1].split(')')[0]
-
-        # count papers
-        papers = set(
-            (papertype,
-             titlecase.titlecase(paper.lower().strip()).replace('+', r'\+'))
-            for (papertype, paper) in re.findall(
-                r'<!--\s*\[([A-Z]*?)\]\s*-->\s*\n.*?\btitle\s*=\s*{(.*?)}',
-                content, re.DOTALL))
-
-        # paper links
-        revcontent = '\n'.join(list(reversed(content.splitlines())))
-        paperlinks = {}
-        for _, p in papers:
-            paper_link = osp.join(github_link, 'configs', basename(dirname(f)),
-                                  'README_zh-CN.md')
-            # print(p, paper_link)
-            paperlinks[p] = ' '.join(
-                (f'[⇨]({paper_link}#{anchor(paperlink)})'
-                 for paperlink in re.findall(
-                     rf'\btitle\s*=\s*{{\s*{p}\s*}}.*?\n## (.*?)\s*[,;]?\s*\n',
-                     revcontent, re.DOTALL | re.IGNORECASE)))
-            # print('   ', paperlinks[p])
-        paperlist = '\n'.join(
-            sorted(f'    - [{t}] {x} ({paperlinks[x]})' for t, x in papers))
-
-        # count configs
-        configs = set(x.lower().strip()
-                      for x in re.findall(r'/configs/.*?\.py', content))
-
-        # count ckpts
-        ckpts = list(
-            x.lower().strip()
-            for x in re.findall(r'\[model\]\(https\:\/\/.*\.pth', content))
-        ckpts.extend(
-            x.lower().strip()
-            for x in re.findall(r'\[ckpt\]\(https\:\/\/.*\.pth', content))
-        ckpts.extend(
-            x.lower().strip()
-            for x in re.findall(r'\[模型\]\(https\:\/\/.*\.pth', content))
-        ckpts.extend(
-            x.lower().strip()
-            for x in re.findall(r'\[权重\]\(https\:\/\/.*\.pth', content))
-        ckpts = set(ckpts)
-
-        # count tasks
-        task_desc = list(
-            set(x.lower().strip()
-                for x in re.findall(r'\*\*任务\*\*: .*', content)))
-        tasks = set()
-        if len(task_desc) > 0:
-            tasks = set(task_desc[0].split('**任务**: ')[1].split(', '))
-
-        statsmsg = f"""## {title}"""
-        if len(tasks) > 0:
-            statsmsg += f"\n* Tasks: {','.join(list(tasks))}"
-        statsmsg += f"""
-
-* 预训练权重个数: {len(ckpts)}
-* 配置文件个数: {len(configs)}
-* 论文个数: {len(papers)}
-{paperlist}
-
-"""
-        # * We should have: {len(glob.glob(osp.join(dirname(f), '*.py')))}
-        content = content.replace('# ', '## ')
-        stats.append((papers, configs, ckpts, tasks, year, statsmsg, content))
-
-    # overview
-    overview = summarize(stats, '概览')
-    with open(osp.join(target_dir, 'overview.md'), 'w') as f:
-        f.write(overview)
-
-    alltasks = func.reduce(lambda a, b: a.union(b),
-                           [t for _, _, _, t, _, _, _ in stats])
-
-    # index.rst
-    indexmsg = """
-.. toctree::
-   :maxdepth: 1
-   :caption: 模型库
-
-   overview.md
-"""
-
-    for task in alltasks:
-        task = task.replace(' ', '_').replace('-', '_').lower()
-        indexmsg += f'   {task}.md\n'
-
-    with open(osp.join(target_dir, 'index.rst'), 'w') as f:
-        f.write(indexmsg)
-
-    #  task-specific
-    for task in alltasks:
-        filtered_model = [
-            (paper, config, ckpt, tasks, year, x, content)
-            for paper, config, ckpt, tasks, year, x, content in stats
-            if task in tasks
-        ]
-        filtered_model = sorted(filtered_model, key=lambda x: x[-3])[::-1]
-        overview = summarize(filtered_model, task)
-
-        msglist = '\n'.join(x for _, _, _, _, _, _, x in filtered_model)
-        task = task.replace(' ', '_').replace('-', '_').lower()
-        with open(osp.join(target_dir, f'{task}.md'), 'w') as f:
-            f.write(overview + '\n' + msglist)
-
-
-if __name__ == '__main__':
-    update_model_zoo()
diff --git a/docs/zh_cn/.gitignore b/docs/zh_cn/.gitignore
deleted file mode 100644
index 825514f423..0000000000
--- a/docs/zh_cn/.gitignore
+++ /dev/null
@@ -1,2 +0,0 @@
-model_zoo
-dataset_zoo
diff --git a/docs/zh_cn/Makefile b/docs/zh_cn/Makefile
deleted file mode 100644
index 56ae5906ce..0000000000
--- a/docs/zh_cn/Makefile
+++ /dev/null
@@ -1,23 +0,0 @@
-# Minimal makefile for Sphinx documentation
-#
-
-# You can set these variables from the command line, and also
-# from the environment for the first two.
-SPHINXOPTS    ?=
-SPHINXBUILD   ?= sphinx-build
-SOURCEDIR     = .
-BUILDDIR      = _build
-
-# Put it first so that "make" without argument is like "make help".
-help:
-	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
-
-.PHONY: help Makefile
-
-# Catch-all target: route all unknown targets to Sphinx using the new
-# "make mode" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).
-%: Makefile
-	rm -rf _build
-	rm -rf model_zoo
-	rm -rf dataset_zoo
-	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
diff --git a/docs/zh_cn/_static/css/readthedocs.css b/docs/zh_cn/_static/css/readthedocs.css
deleted file mode 100644
index 6905ec0805..0000000000
--- a/docs/zh_cn/_static/css/readthedocs.css
+++ /dev/null
@@ -1,6 +0,0 @@
-.header-logo {
-    background-image: url("../image/mmagic-logo.png");
-    background-size: 142px 46px;
-    height: 46px;
-    width: 142px;
-}
diff --git a/docs/zh_cn/_static/image/mmagic-logo.png b/docs/zh_cn/_static/image/mmagic-logo.png
deleted file mode 100644
index aefeff7ceb..0000000000
Binary files a/docs/zh_cn/_static/image/mmagic-logo.png and /dev/null differ
diff --git a/docs/zh_cn/_static/image/qq_group2_qrcode.jpg b/docs/zh_cn/_static/image/qq_group2_qrcode.jpg
deleted file mode 100644
index 7c6b04f561..0000000000
Binary files a/docs/zh_cn/_static/image/qq_group2_qrcode.jpg and /dev/null differ
diff --git a/docs/zh_cn/_static/image/zhihu_qrcode.jpg b/docs/zh_cn/_static/image/zhihu_qrcode.jpg
deleted file mode 100644
index c745fb027f..0000000000
Binary files a/docs/zh_cn/_static/image/zhihu_qrcode.jpg and /dev/null differ
diff --git a/docs/zh_cn/_static/resources/mmediting-demo.jpg b/docs/zh_cn/_static/resources/mmediting-demo.jpg
deleted file mode 100644
index 8891877766..0000000000
Binary files a/docs/zh_cn/_static/resources/mmediting-demo.jpg and /dev/null differ
diff --git a/docs/zh_cn/_static/resources/mmediting-logo.png b/docs/zh_cn/_static/resources/mmediting-logo.png
deleted file mode 100644
index 95b851a916..0000000000
Binary files a/docs/zh_cn/_static/resources/mmediting-logo.png and /dev/null differ
diff --git a/docs/zh_cn/_static/resources/qq_group_qrcode.jpg b/docs/zh_cn/_static/resources/qq_group_qrcode.jpg
deleted file mode 100644
index 417347449f..0000000000
Binary files a/docs/zh_cn/_static/resources/qq_group_qrcode.jpg and /dev/null differ
diff --git a/docs/zh_cn/_templates/404.html b/docs/zh_cn/_templates/404.html
deleted file mode 100644
index 931b4c8246..0000000000
--- a/docs/zh_cn/_templates/404.html
+++ /dev/null
@@ -1,16 +0,0 @@
-{% extends "layout.html" %}
-
-{% block body %}
-
-<h1>未找到页面</h1>
-<p>
-  未找到你要打开的页面。
-</p>
-<p>
-  如果你是从旧版本文档跳转至此,可能是对应的页面被移动了。请从左侧的目录中寻找新版本文档,或者跳转至<a href="{{ pathto(root_doc) }}">首页</a>。
-</p>
-<p>
-  如果你找不到希望打开的文档,欢迎在 <a href="https://github.com/open-mmlab/mmagic/issues/new/choose">Issue</a> 中告诉我们!
-</p>
-
-{% endblock %}
diff --git a/docs/zh_cn/conf.py b/docs/zh_cn/conf.py
deleted file mode 100644
index 10366c3d40..0000000000
--- a/docs/zh_cn/conf.py
+++ /dev/null
@@ -1,159 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-# Configuration file for the Sphinx documentation builder.
-#
-# This file only contains a selection of the most common options. For a full
-# list see the documentation:
-# https://www.sphinx-doc.org/en/master/usage/configuration.html
-
-# -- Path setup --------------------------------------------------------------
-
-# If extensions (or modules to document with autodoc) are in another directory,
-# add these directories to sys.path here. If the directory is relative to the
-# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-import os
-import subprocess
-import sys
-
-import pytorch_sphinx_theme
-
-sys.path.insert(0, os.path.abspath('../..'))
-
-# -- Project information -----------------------------------------------------
-
-project = 'MMagic'
-copyright = '2023, MMagic Authors'
-author = 'MMagic Authors'
-
-# -- General configuration ---------------------------------------------------
-
-# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
-extensions = [
-    'sphinx.ext.intersphinx',
-    'sphinx.ext.napoleon',
-    'sphinx.ext.viewcode',
-    'sphinx.ext.autosectionlabel',
-    'sphinx_markdown_tables',
-    'sphinx_copybutton',
-    'sphinx_tabs.tabs',
-    'myst_parser',
-]
-
-extensions.append('notfound.extension')  # enable customizing not-found page
-
-extensions.append('autoapi.extension')
-autoapi_type = 'python'
-autoapi_dirs = ['../../mmagic']
-autoapi_add_toctree_entry = False
-autoapi_template_dir = '_templates'
-# autoapi_options = ['members', 'undoc-members', 'show-module-summary']
-
-# # Core library for html generation from docstrings
-# extensions.append('sphinx.ext.autodoc')
-# extensions.append('sphinx.ext.autodoc.typehints')
-# # Enable 'expensive' imports for sphinx_autodoc_typehints
-# set_type_checking_flag = True
-# # Sphinx-native method. Not as good as sphinx_autodoc_typehints
-# autodoc_typehints = "description"
-
-# extensions.append('sphinx.ext.autosummary') # Create neat summary tables
-# autosummary_generate = True  # Turn on sphinx.ext.autosummary
-# # Add __init__ doc (ie. params) to class summaries
-# autoclass_content = 'both'
-# autodoc_skip_member = []
-# # If no docstring, inherit from base class
-# autodoc_inherit_docstrings = True
-
-autodoc_mock_imports = [
-    'mmagic.version', 'mmcv._ext', 'mmcv.ops.ModulatedDeformConv2d',
-    'mmcv.ops.modulated_deform_conv2d', 'clip', 'resize_right', 'pandas'
-]
-
-source_suffix = {
-    '.rst': 'restructuredtext',
-    '.md': 'markdown',
-}
-
-# Ignore >>> when copying code
-copybutton_prompt_text = r'>>> |\.\.\. '
-copybutton_prompt_is_regexp = True
-
-# Add any paths that contain templates here, relative to this directory.
-templates_path = ['../en/_templates']
-
-# List of patterns, relative to source directory, that match files and
-# directories to ignore when looking for source files.
-# This pattern also affects html_static_path and html_extra_path.
-exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
-
-# -- Options for HTML output -------------------------------------------------
-
-# The theme to use for HTML and HTML Help pages.  See the documentation for
-# a list of builtin themes.
-#
-# html_theme = 'sphinx_rtd_theme'
-html_theme = 'pytorch_sphinx_theme'
-html_theme_path = [pytorch_sphinx_theme.get_html_theme_path()]
-
-html_theme_options = {
-    # 'logo_url': 'https://mmocr.readthedocs.io/en/latest/',
-    'menu': [
-        {
-            'name': 'GitHub',
-            'url': 'https://github.com/open-mmlab/mmagic',
-        },
-        {
-            'name':
-            '版本',
-            'children': [
-                {
-                    'name': 'MMagic 1.x',
-                    'url': 'https://mmagic.readthedocs.io/zh_CN/latest/',
-                    'description': 'Main 分支文档'
-                },
-                {
-                    'name': 'MMEditing 0.x',
-                    'url': 'https://mmagic.readthedocs.io/zh_CN/0.x/',
-                    'description': '0.x 分支文档'
-                },
-            ],
-            'active':
-            True,
-        },
-    ],
-    'menu_lang':
-    'cn'
-}
-
-# Add any paths that contain custom static files (such as style sheets) here,
-# relative to this directory. They are copied after the builtin static files,
-# so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
-html_css_files = ['css/readthedocs.css']
-
-myst_enable_extensions = ['colon_fence']
-myst_heading_anchors = 3
-
-language = 'zh_CN'
-
-# The master toctree document.
-root_doc = 'index'
-notfound_template = '404.html'
-
-
-def builder_inited_handler(app):
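-    # Regenerate the model zoo and dataset zoo pages before each Sphinx build.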
-    subprocess.run(['python', './.dev_scripts/update_model_zoo.py'])
-    subprocess.run(['python', './.dev_scripts/update_dataset_zoo.py'])
-
-
-def skip_member(app, what, name, obj, skip, options):
-    if what == 'package' or what == 'module':
-        skip = True
-    return skip
-
-
-def setup(app):
-    app.connect('builder-inited', builder_inited_handler)
-    app.connect('autoapi-skip-member', skip_member)
diff --git a/docs/zh_cn/index.rst b/docs/zh_cn/index.rst
deleted file mode 100644
index 0b30fafd38..0000000000
--- a/docs/zh_cn/index.rst
+++ /dev/null
@@ -1,143 +0,0 @@
-欢迎来到 MMagic 的中文文档!
-=====================================
-
-您可以在页面左下角切换中英文文档。
-
-.. note::
-   目前英文版有更多的内容,欢迎加入我们一起提升中文文档!
-   您可以通过 issue,discussion 或者我们的社区群来联系我们!
-
-
-.. toctree::
-   :maxdepth: 1
-   :caption: MMagic 社区
-
-   community/contributing.md
-   community/projects.md
-
-
-.. toctree::
-   :maxdepth: 1
-   :caption: 新手入门
-
-   概述 <get_started/overview.md>
-   安装 <get_started/install.md>
-   快速运行 <get_started/quick_run.md>
-
-
-.. toctree::
-   :maxdepth: 1
-   :caption: 基础教程
-
-   user_guides/config.md
-   user_guides/dataset_prepare.md
-   user_guides/inference.md
-   user_guides/train_test.md
-   user_guides/metrics.md
-   user_guides/visualization.md
-   user_guides/useful_tools.md
-   user_guides/deploy.md
-
-.. toctree::
-   :maxdepth: 2
-   :caption: 进阶教程
-
-   advanced_guides/evaluator.md
-   advanced_guides/structures.md
-   advanced_guides/data_preprocessor.md
-   advanced_guides/data_flow.md
-
-.. toctree::
-   :maxdepth: 1
-   :caption: 开发指南
-
-   howto/models.md
-   howto/dataset.md
-   howto/transforms.md
-   howto/losses.md
-
-
-.. toctree::
-   :maxdepth: 1
-   :caption: 常见问题
-
-   faq.md
-
-
-.. toctree::
-   :maxdepth: 2
-   :caption: 模型库
-
-   model_zoo/index.rst
-
-
-.. toctree::
-   :maxdepth: 1
-   :caption: 数据集库
-
-   dataset_zoo/index.rst
-
-.. toctree::
-   :maxdepth: 1
-   :caption: 变更日志
-
-   changelog.md
-
-
-.. toctree::
-   :maxdepth: 2
-   :caption: 接口文档(英文)
-
-   mmagic/apis.inferencers <autoapi/mmagic/apis/inferencers/index.rst>
-   mmagic/structures <autoapi/mmagic/structures/index.rst>
-   mmagic/datasets <autoapi/mmagic/datasets/index.rst>
-   mmagic/datasets.transforms <autoapi/mmagic/datasets/transforms/index.rst>
-   mmagic/evaluation <autoapi/mmagic/evaluation/index.rst>
-   mmagic/visualization <autoapi/mmagic/visualization/index.rst>
-   mmagic/engine.hooks <autoapi/mmagic/engine/hooks/index.rst>
-   mmagic/engine.logging <autoapi/mmagic/engine/logging/index.rst>
-   mmagic/engine.optimizers <autoapi/mmagic/engine/optimizers/index.rst>
-   mmagic/engine.runner <autoapi/mmagic/engine/runner/index.rst>
-   mmagic/engine.schedulers <autoapi/mmagic/engine/schedulers/index.rst>
-   mmagic/models.archs <autoapi/mmagic/models/archs/index.rst>
-   mmagic/models.base_models <autoapi/mmagic/models/base_models/index.rst>
-   mmagic/models.losses <autoapi/mmagic/models/losses/index.rst>
-   mmagic/models.data_preprocessors <autoapi/mmagic/models/data_preprocessors/index.rst>
-   mmagic/models.utils <autoapi/mmagic/models/losses/utils.rst>
-   mmagic/models.editors <autoapi/mmagic/models/editors/index.rst>
-   mmagic/utils <autoapi/mmagic/utils/index.rst>
-
-.. toctree::
-   :maxdepth: 1
-   :caption: 迁移指南
-
-   migration/overview.md
-   migration/runtime.md
-   migration/models.md
-   migration/eval_test.md
-   migration/schedule.md
-   migration/data.md
-   migration/distributed_train.md
-   migration/optimizers.md
-   migration/visualization.md
-   migration/amp.md
-
-
-.. toctree::
-   :maxdepth: 1
-   :caption: 设备支持
-
-   device/npu_zh.md
-
-
-.. toctree::
-   :caption: 语言切换
-
-   switch_language.md
-
-Indices and tables
-==================
-
-* :ref:`genindex`
-* :ref:`modindex`
-* :ref:`search`
diff --git a/docs/zh_cn/make.bat b/docs/zh_cn/make.bat
deleted file mode 100644
index 8a3a0e25b4..0000000000
--- a/docs/zh_cn/make.bat
+++ /dev/null
@@ -1,36 +0,0 @@
-@ECHO OFF
-
-pushd %~dp0
-
-REM Command file for Sphinx documentation
-
-if "%SPHINXBUILD%" == "" (
-	set SPHINXBUILD=sphinx-build
-)
-set SOURCEDIR=.
-set BUILDDIR=_build
-
-if "%1" == "" goto help
-
-%SPHINXBUILD% >NUL 2>NUL
-if errorlevel 9009 (
-	echo.
-	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
-	echo.installed, then set the SPHINXBUILD environment variable to point
-	echo.to the full path of the 'sphinx-build' executable. Alternatively you
-	echo.may add the Sphinx directory to PATH.
-	echo.
-	echo.If you don't have Sphinx installed, grab it from
-	echo.http://sphinx-doc.org/
-	exit /b 1
-)
-
-
-%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-goto end
-
-:help
-%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
-
-:end
-popd
diff --git a/docs/zh_cn/stat.py b/docs/zh_cn/stat.py
deleted file mode 100755
index 3513c3c8c9..0000000000
--- a/docs/zh_cn/stat.py
+++ /dev/null
@@ -1,178 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) OpenMMLab. All rights reserved.
-import functools as func
-import glob
-import re
-from os.path import basename, splitext
-
-import numpy as np
-import titlecase
-
-
-def anchor(name):
-    return re.sub(r'-+', '-',
-                  re.sub(r'[^a-zA-Z0-9\+]', '-',
-                         name.strip().lower())).strip('-')
-
-
-# Count algorithms
-
-files = sorted(glob.glob('*_models.md'))
-# files = sorted(glob.glob('docs/*_models.md'))
-
-stats = []
-
-for f in files:
-    with open(f, 'r') as content_file:
-        content = content_file.read()
-
-    # title
-    title = content.split('\n')[0].replace('#', '')
-
-    # count papers
-    papers = set(
-        (papertype,
-         titlecase.titlecase(paper.lower().strip()).replace('+', r'\+'))
-        for (papertype, paper) in re.findall(
-            r'<!--\s*\[([A-Z]*?)\]\s*-->\s*\n.*?\btitle\s*=\s*{(.*?)}',
-            content, re.DOTALL))
-    # paper links
-    revcontent = '\n'.join(list(reversed(content.splitlines())))
-    paperlinks = {}
-    for _, p in papers:
-        print(p)
-        paperlinks[p] = ' '.join(
-            (f'[⇨]({splitext(basename(f))[0]}.html#{anchor(paperlink)})'
-             for paperlink in re.findall(
-                 rf'\btitle\s*=\s*{{\s*{p}\s*}}.*?\n## (.*?)\s*[,;]?\s*\n',
-                 revcontent, re.DOTALL | re.IGNORECASE)))
-        print('   ', paperlinks[p])
-    paperlist = '\n'.join(
-        sorted(f'    - [{t}] {x} ({paperlinks[x]})' for t, x in papers))
-    # count configs
-    configs = set(x.lower().strip()
-                  for x in re.findall(r'https.*configs/.*\.py', content))
-
-    # count ckpts
-    ckpts = set(x.lower().strip()
-                for x in re.findall(r'https://download.*\.pth', content)
-                if 'mmedit' in x)
-
-    statsmsg = f"""
-## [{title}]({f})
-
-* 模型权重文件数量: {len(ckpts)}
-* 配置文件数量: {len(configs)}
-* 论文数量: {len(papers)}
-{paperlist}
-
-    """
-
-    stats.append((papers, configs, ckpts, statsmsg))
-
-allpapers = func.reduce(lambda a, b: a.union(b), [p for p, _, _, _ in stats])
-allconfigs = func.reduce(lambda a, b: a.union(b), [c for _, c, _, _ in stats])
-allckpts = func.reduce(lambda a, b: a.union(b), [c for _, _, c, _ in stats])
-
-# Summarize
-
-msglist = '\n'.join(x for _, _, _, x in stats)
-papertypes, papercounts = np.unique([t for t, _ in allpapers],
-                                    return_counts=True)
-countstr = '\n'.join(
-    [f'   - {t}: {c}' for t, c in zip(papertypes, papercounts)])
-
-modelzoo = f"""
-# 总览
-
-* 模型权重文件数量: {len(allckpts)}
-* 配置文件数量: {len(allconfigs)}
-* 论文数量: {len(allpapers)}
-{countstr}
-
-有关支持的数据集,请参阅 [数据集总览](datasets.md)。
-
-{msglist}
-
-"""
-
-with open('modelzoo.md', 'w') as f:
-    f.write(modelzoo)
-
-# Count datasets
-
-files = sorted(glob.glob('*_datasets.md'))
-
-datastats = []
-
-for f in files:
-    with open(f, 'r') as content_file:
-        content = content_file.read()
-
-    # title
-    title = content.split('\n')[0].replace('#', '')
-
-    # count papers
-    papers = set(
-        (papertype,
-         titlecase.titlecase(paper.lower().strip()).replace('+', r'\+'))
-        for (papertype, paper) in re.findall(
-            r'<!--\s*\[([A-Z]*?)\]\s*-->\s*\n.*?\btitle\s*=\s*{(.*?)}',
-            content, re.DOTALL))
-    # paper links
-    revcontent = '\n'.join(list(reversed(content.splitlines())))
-    paperlinks = {}
-    for _, p in papers:
-        print(p)
-        paperlinks[p] = ', '.join(
-            (f'[{p} ⇨]({splitext(basename(f))[0]}.html#{anchor(p)})'
-             for p in re.findall(
-                 rf'\btitle\s*=\s*{{\s*{p}\s*}}.*?\n## (.*?)\s*[,;]?\s*\n',
-                 revcontent, re.DOTALL | re.IGNORECASE)))
-        print('   ', paperlinks[p])
-    paperlist = '\n'.join(
-        sorted(f'    - [{t}] {x} ({paperlinks[x]})' for t, x in papers))
-    # count configs
-    configs = set(x.lower().strip()
-                  for x in re.findall(r'https.*configs/.*\.py', content))
-
-    # count ckpts
-    ckpts = set(x.lower().strip()
-                for x in re.findall(r'https://download.*\.pth', content)
-                if 'mmedit' in x)
-
-    statsmsg = f"""
-## [{title}]({f})
-
-* 论文数量: {len(papers)}
-{paperlist}
-
-    """
-
-    datastats.append((papers, configs, ckpts, statsmsg))
-
-alldatapapers = func.reduce(lambda a, b: a.union(b),
-                            [p for p, _, _, _ in datastats])
-
-# Summarize
-
-msglist = '\n'.join(x for _, _, _, x in stats)
-datamsglist = '\n'.join(x for _, _, _, x in datastats)
-papertypes, papercounts = np.unique([t for t, _ in alldatapapers],
-                                    return_counts=True)
-countstr = '\n'.join(
-    [f'   - {t}: {c}' for t, c in zip(papertypes, papercounts)])
-
-modelzoo = f"""
-# 总览
-
-* 论文数量: {len(alldatapapers)}
-{countstr}
-
-有关支持的算法, 可参见 [模型总览](modelzoo.md).
-
-{datamsglist}
-"""
-
-with open('datasets.md', 'w') as f:
-    f.write(modelzoo)
diff --git a/docs/zh_cn/switch_language.md b/docs/zh_cn/switch_language.md
deleted file mode 100644
index 145360f0ac..0000000000
--- a/docs/zh_cn/switch_language.md
+++ /dev/null
@@ -1,3 +0,0 @@
-## <a href='https://mmagic.readthedocs.io/en/latest/'>English</a>
-
-## <a href='https://mmagic.readthedocs.io/zh_CN/latest/'>简体中文</a>
diff --git a/mkdocs.yml b/mkdocs.yml
new file mode 100644
index 0000000000..43c8fea838
--- /dev/null
+++ b/mkdocs.yml
@@ -0,0 +1,106 @@
+# Site information
+site_name: mmagic 文档教程
+site_url: https://eanyang7.github.io/mmagic # with `mkdocs serve`, browse http://127.0.0.1:8000/xxx/
+site_description: "mmagic"
+site_author: Ean Yang
+# Repository
+repo_url: https://github.com/EanYang7/mmagic
+repo_name: github仓库
+
+theme:
+  name: material
+  language: zh
+  # site logo
+  logo: assets/logo.jpg
+  # site favicon
+  favicon: assets/favicon.jpg
+  # icons shown in the top-right corner
+  icon:
+    repo: fontawesome/brands/github-alt
+    edit: material/pencil
+    view: material/eye
+  palette:
+    # available colors: see https://squidfunk.github.io/mkdocs-material/setup/changing-the-colors/#primary-color
+    # toggle to dark mode
+    - media: "(prefers-color-scheme: light)"
+      scheme: default
+      primary: red # primary color
+      accent: deep purple # accent color
+      toggle:
+        icon: material/weather-sunny
+        name: 切换为暗黑模式
+
+    # toggle to light mode
+    - media: "(prefers-color-scheme: dark)"
+      scheme: slate
+      primary: deep purple
+      accent: red
+      toggle:
+        icon: material/weather-night
+        name: 切换为浅色模式
+  # navigation settings
+  features:
+    - navigation.instant
+    - navigation.instant.progress
+    - navigation.tracking
+    # - navigation.tabs # tabs
+    # - navigation.tabs.sticky # keep tabs visible while scrolling
+    # - navigation.sections # section view; can be combined with tabs
+    # - navigation.expand # expand the sidebar
+    - navigation.prune
+    # - navigation.indexes # clicking a folder shows its index.md
+    - navigation.top # back-to-top button
+    - toc.follow # table of contents follows scrolling
+    - header.autohide # hide the header while scrolling
+    - navigation.footer # footer navigation (previous/next page)
+    - search.suggest # search suggestions
+    - search.highlight # highlight search results
+    - search.share # share search
+    # edit and view source
+    - content.action.edit
+    - content.action.view
+    - content.code.copy
+# edit link
+edit_uri: "tree/main/docs/"
+extra:
+  generator: false
+  # social links
+  social:
+    - icon: fontawesome/brands/github
+      link: https://github.com/YQisme
+      name: github主页
+    - icon: fontawesome/brands/bilibili
+      link: https://space.bilibili.com/244185393?spm_id_from=333.788.0.0
+      name: b站主页
+    - icon: fontawesome/solid/person
+      link: https://eanyang7.com
+      name: 个人主页
+
+copyright: Copyright &copy; 2023 Ean Yang
+
+plugins:
+  - search:
+      separator: '[\s\u200b\-]'
+  - git-revision-date-localized:
+      enable_creation_date: true
+  - mkdocs-jupyter:
+      include_source: True
+      # ignore_h1_titles: True # ignore H1 titles (default False; they would replace custom nav titles)
+
+markdown_extensions:
+  - toc:
+      permalink: ⚓︎
+  - pymdownx.tasklist:
+      custom_checkbox: true
+  - pymdownx.arithmatex:
+      generic: true
+  - admonition
+  - pymdownx.highlight
+  - pymdownx.superfences
+  - pymdownx.details
+
+# nav:
+#   - Home: index.md
+#   - Examples:
+#     - Example 1: demo/demo1.md
+#     - Example 2: demo/demo2.md