Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance

¹Nanjing University   ²Fudan University   ³Alibaba Group
*Equal Contribution   ⁺Corresponding Author

Framework

(framework overview figure)

Installation

  • System requirement: Ubuntu 20.04
  • Tested GPUs: A100

Create conda environment:

  conda create -n champ python=3.10
  conda activate champ

Install packages with pip:

  pip install -r requirements.txt
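
To sanity-check the environment afterwards, here is a minimal sketch (it assumes requirements.txt installs PyTorch with CUDA support, which is not verified here):

  # env_check.py -- hypothetical helper, not part of the repository.
  # Assumes requirements.txt installed a CUDA-enabled PyTorch build.
  import torch

  print(f"PyTorch version: {torch.__version__}")
  print(f"CUDA available: {torch.cuda.is_available()}")
  if torch.cuda.is_available():
      print(f"GPU: {torch.cuda.get_device_name(0)}")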

Download pretrained models

  1. Download the pretrained weights of the base models: StableDiffusion V1.5, sd-vae-ft-mse, and the image encoder (see the directory layout below).

  2. Download our checkpoints:
    Our checkpoints consist of the denoising UNet, the guidance encoders, the reference UNet, and the motion module.

Finally, these pretrained models should be organized as follows:

./pretrained_models/
|-- champ
|   |-- denoising_unet.pth
|   |-- guidance_encoder_depth.pth
|   |-- guidance_encoder_dwpose.pth
|   |-- guidance_encoder_normal.pth
|   |-- guidance_encoder_semantic_map.pth
|   |-- reference_unet.pth
|   `-- motion_module.pth
|-- image_encoder
|   |-- config.json
|   `-- pytorch_model.bin
|-- sd-vae-ft-mse
|   |-- config.json
|   |-- diffusion_pytorch_model.bin
|   `-- diffusion_pytorch_model.safetensors
`-- stable-diffusion-v1-5
    |-- feature_extractor
    |   `-- preprocessor_config.json
    |-- model_index.json
    |-- unet
    |   |-- config.json
    |   `-- diffusion_pytorch_model.bin
    `-- v1-inference.yaml
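
If you prefer to script the base model downloads, here is a minimal sketch using huggingface_hub. The repo IDs below are assumptions inferred from the directory names above, and the Champ checkpoints are not covered since their hosting location is not stated here; verify everything against the official download links:

  # download_base_models.py -- hypothetical helper; repo IDs are assumptions.
  from huggingface_hub import snapshot_download

  # VAE weights, matching the sd-vae-ft-mse folder in the layout above.
  snapshot_download(
      repo_id="stabilityai/sd-vae-ft-mse",
      local_dir="pretrained_models/sd-vae-ft-mse",
  )

  # Stable Diffusion v1.5, restricted to the files shown in the layout.
  snapshot_download(
      repo_id="runwayml/stable-diffusion-v1-5",
      local_dir="pretrained_models/stable-diffusion-v1-5",
      allow_patterns=["v1-inference.yaml", "model_index.json",
                      "unet/*", "feature_extractor/*"],
  )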

Inference

We provide several sets of example data for inference. Please download them first and place them in the example_data folder. Then run inference with:

  python inference.py --config configs/inference.yaml

Animation results will be saved in the results folder. You can change the reference image or the guidance motion by modifying inference.yaml. We will later provide the code for obtaining driving motion from in-the-wild videos.
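
If you want to swap inputs programmatically rather than editing the file by hand, here is a minimal sketch (the key names below are hypothetical placeholders; take the real schema from configs/inference.yaml itself):

  # swap_inputs.py -- hypothetical helper; key names are placeholders.
  import yaml

  with open("configs/inference.yaml") as f:
      cfg = yaml.safe_load(f)

  # HYPOTHETICAL keys -- check configs/inference.yaml for the real names.
  cfg["ref_image_path"] = "example_data/ref_images/my_image.png"
  cfg["motion_path"] = "example_data/motions/my_motion"

  with open("configs/inference_custom.yaml", "w") as f:
      yaml.safe_dump(cfg, f)

Then run python inference.py --config configs/inference_custom.yaml.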

Acknowledgements

We thank the authors of MagicAnimate, Animate Anyone, and AnimateDiff for their excellent work. Our project is built upon Moore-AnimateAnyone, and we are grateful for their open-source contributions.

Citation

If you find our work useful for your research, please consider citing the paper:

@inproceedings{zhu2024champ,
    author    = {Shenhao Zhu and Junming Leo Chen and Zuozhuo Dai and Yinghui Xu and Xun Cao and Yao Yao and Hao Zhu and Siyu Zhu},
    title     = {Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance},
    booktitle = {arXiv},
    year      = {2024}
}
