- System requirement: Ubuntu 20.04
- Tested GPUs: A100
Create conda environment:

```bash
conda create -n champ python=3.10
conda activate champ
```
Install packages with `pip`:

```bash
pip install -r requirements.txt
```
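A quick sanity check after installation can save debugging time later; the snippet below is only a sketch and assumes PyTorch is pulled in by `requirements.txt`:

```python
# Optional sanity check (assumes PyTorch is installed via requirements.txt):
# confirms the interpreter/PyTorch versions and that CUDA sees a GPU.
import sys
import torch

print(f"Python {sys.version.split()[0]}, PyTorch {torch.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```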
- Download the pretrained weights of the base models: `stable-diffusion-v1-5`, `sd-vae-ft-mse`, and `image_encoder` (a download sketch is given below this list).
- Download our checkpoints, which consist of the denoising UNet, guidance encoders, Reference UNet, and motion module.
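If you prefer scripting the base-model downloads, a sketch like the following works with the `huggingface_hub` client. The repository IDs here are assumptions inferred from the folder names in the layout below; substitute the official links if they differ:

```python
# Hedged sketch: fetch the base models with huggingface_hub.
# The repo IDs are assumptions based on the folder names used below.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="runwayml/stable-diffusion-v1-5",
                  local_dir="pretrained_models/stable-diffusion-v1-5")
snapshot_download(repo_id="stabilityai/sd-vae-ft-mse",
                  local_dir="pretrained_models/sd-vae-ft-mse")
snapshot_download(repo_id="lambdalabs/sd-image-variations-diffusers",
                  allow_patterns=["image_encoder/*"],
                  local_dir="pretrained_models")
```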
Finally, these pretrained models should be organized as follows:
```
./pretrained_models/
|-- champ
|   |-- denoising_unet.pth
|   |-- guidance_encoder_depth.pth
|   |-- guidance_encoder_dwpose.pth
|   |-- guidance_encoder_normal.pth
|   |-- guidance_encoder_semantic_map.pth
|   |-- reference_unet.pth
|   `-- motion_module.pth
|-- image_encoder
|   |-- config.json
|   `-- pytorch_model.bin
|-- sd-vae-ft-mse
|   |-- config.json
|   |-- diffusion_pytorch_model.bin
|   `-- diffusion_pytorch_model.safetensors
`-- stable-diffusion-v1-5
    |-- feature_extractor
    |   `-- preprocessor_config.json
    |-- model_index.json
    |-- unet
    |   |-- config.json
    |   `-- diffusion_pytorch_model.bin
    `-- v1-inference.yaml
```
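A short check such as the one below (a sketch; the paths simply mirror the tree above) can confirm the layout before running inference:

```python
# Verifies that the key weights from the tree above are present.
from pathlib import Path

root = Path("./pretrained_models")
expected = [
    "champ/denoising_unet.pth",
    "champ/guidance_encoder_depth.pth",
    "champ/guidance_encoder_dwpose.pth",
    "champ/guidance_encoder_normal.pth",
    "champ/guidance_encoder_semantic_map.pth",
    "champ/reference_unet.pth",
    "champ/motion_module.pth",
    "image_encoder/pytorch_model.bin",
    "sd-vae-ft-mse/diffusion_pytorch_model.safetensors",
    "stable-diffusion-v1-5/unet/diffusion_pytorch_model.bin",
]
missing = [p for p in expected if not (root / p).is_file()]
if missing:
    print("Missing files:", *missing, sep="\n  ")
else:
    print("All expected pretrained weights found.")
```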
We have provided several sets of example data for inference. Please first download and place them in the `example_data` folder.
Here is the command for inference:

```bash
python inference.py --config configs/inference.yaml
```
Animation results will be saved in the `results` folder. You can change the reference image or the guidance motion by modifying `inference.yaml`. We will later provide the code for obtaining driving motion from in-the-wild videos.
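If you are unsure which entries control the reference image and guidance motion, a small sketch like this (assuming the config is plain YAML readable by PyYAML) prints the top-level keys of `inference.yaml` so you can locate the fields to edit:

```python
# Lists the top-level keys of the inference config; assumes plain YAML.
import yaml

with open("configs/inference.yaml") as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(f"{key}: {type(value).__name__}")
```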
We thank the authors of MagicAnimate, Animate Anyone, and AnimateDiff for their excellent work. Our project is built upon Moore-AnimateAnyone, and we are grateful for their open-source contributions.
If you find our work useful for your research, please consider citing the paper:
```bibtex
@inproceedings{zhu2024champ,
  author    = {Shenhao Zhu* and Junming Leo Chen* and Zuozhuo Dai and Yinghui Xu and Xun Cao and Yao Yao and Hao Zhu and Siyu Zhu},
  title     = {Champ: Controllable and Consistent Human Image Animation with 3D Parametric Guidance},
  booktitle = {arXiv},
  year      = {2024}
}
```