WildAvatar: Web-scale In-the-wild Video Dataset for 3D Avatar Creation

Huazhong University of Science and Technology · Nanyang Technological University · Great Bay University · Shanghai AI Laboratory


Environments

conda create -n wildavatar python=3.9
conda activate wildavatar
pip install -r requirements.txt
pip install pyopengl==3.1.4
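After installing, a quick sanity check is to confirm the interpreter version and that PyOpenGL imports. This is a minimal sketch that only assumes the packages installed by the commands above:

import sys
import OpenGL  # installed by "pip install pyopengl==3.1.4"

print("python :", sys.version.split()[0])  # expect 3.9.x from the conda environment above
print("pyopengl imported OK:", OpenGL.__name__)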

Prepare Dataset

  1. Download WildAvatar.zip.
  2. Put WildAvatar.zip under ./data/WildAvatar/.
  3. Unzip WildAvatar.zip.
  4. Install yt-dlp.
  5. Download and extract images from YouTube (see the sketch below) by running
python prepare_data.py --ytdl ${PATH_TO_YT-DLP}
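prepare_data.py drives this step for every subject; the sketch below only illustrates the idea behind it for a single video ID. The output layout (video.mp4, a frames/ folder) and the use of OpenCV are illustrative assumptions, not the toolbox's actual file structure; only yt-dlp's standard -f/-o flags are used.

# Illustrative sketch only: download one clip with yt-dlp and dump its frames.
# The real pipeline is implemented in prepare_data.py; paths below are hypothetical.
import os
import subprocess

import cv2  # assumption: OpenCV is available in the environment

youtube_id = "__-ChmS-8m8"  # example subject ID used later in this README
out_dir = f"data/WildAvatar/{youtube_id}"
video_path = os.path.join(out_dir, "video.mp4")
frame_dir = os.path.join(out_dir, "frames")
os.makedirs(frame_dir, exist_ok=True)

# 1. fetch the source video (yt-dlp's -f selects a format, -o sets the output path)
subprocess.run(
    ["yt-dlp", "-f", "mp4", "-o", video_path,
     f"https://www.youtube.com/watch?v={youtube_id}"],
    check=True,
)

# 2. extract every frame as a JPEG
cap = cv2.VideoCapture(video_path)
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imwrite(os.path.join(frame_dir, f"{idx:06d}.jpg"), frame)
    idx += 1
cap.release()
print(f"extracted {idx} frames to {frame_dir}")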

Visualization

  1. Put SMPL_NEUTRAL.pkl under ./assets/.
  2. Run the following script to visualize the SMPL overlay for the human subject ${youtube_ID}
python vis_smpl.py --subject "${youtube_ID}"
  3. The SMPL overlay and mask visualizations can then be found in data/WildAvatar/${youtube_ID}/smpl and data/WildAvatar/${youtube_ID}/smpl_masks, respectively.

For example, if you run

python vis_smpl.py --subject "__-ChmS-8m8"

the SMPL overlay and mask visualizations can be found in data/WildAvatar/__-ChmS-8m8/smpl and data/WildAvatar/__-ChmS-8m8/smpl_masks, respectively.
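If vis_smpl.py cannot find or read the model, a minimal check is to load ./assets/SMPL_NEUTRAL.pkl directly. This sketch assumes the standard SMPL neutral-model pickle, which is a Python 2 pickle and may need chumpy installed to deserialize:

# Sanity check that ./assets/SMPL_NEUTRAL.pkl is in place and readable.
# Assumes the standard SMPL neutral model file; some of its entries are chumpy
# arrays, so deserialization may require the chumpy package.
import pickle

with open("assets/SMPL_NEUTRAL.pkl", "rb") as f:
    smpl = pickle.load(f, encoding="latin1")  # latin1 handles the Python 2 pickle

print(sorted(smpl.keys()))        # expect keys such as 'v_template', 'shapedirs', 'J_regressor', 'f'
print(smpl["v_template"].shape)   # (6890, 3): vertices of the SMPL template mesh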

Using WildAvatar

For training and testing on WildAvatar, we currently provide adapted code for HumanNeRF and GauHuman.
