Dataset

Training Data

We use the AMASS, InstaVariety, MPI-INF-3DHP, Human3.6M, and 3DPW datasets for training. Please register on their websites to download and process the data. You can download the parsed ViT versions of the InstaVariety, MPI-INF-3DHP, Human3.6M, and 3DPW data from Google Drive and save them under the dataset/parsed_data folder.
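The folder layout described above can be sketched as follows. The folder names (parsed_data, detection_results) come from this document; the helper function itself is only illustrative:

```python
from pathlib import Path
import tempfile

def make_dataset_dirs(root: str) -> list[Path]:
    """Create the folders this document expects under the project root.

    The folder names (parsed_data, detection_results) are taken from this
    README; everything else here is illustrative.
    """
    dirs = [
        Path(root) / "dataset" / "parsed_data",        # parsed training/eval data
        Path(root) / "dataset" / "detection_results",  # ViTPose 2D keypoint outputs
    ]
    for d in dirs:
        d.mkdir(parents=True, exist_ok=True)
    return dirs

# Demonstrate in a temporary directory so nothing is written to the repo.
with tempfile.TemporaryDirectory() as tmp:
    created = make_dataset_dirs(tmp)
    assert all(d.is_dir() for d in created)
```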

Process AMASS dataset

After downloading the AMASS dataset, you can process it by running:

python -m lib.data_utils.amass_utils

The processed data will be stored at dataset/parsed_data/amass.pth.

Process 3DPW, MPII3D, Human3.6M, and InstaVariety datasets

First, visit TCMR and download the preprocessed data to `dataset/parsed_data/TCMR_preproc/`.

Next, prepare 2D keypoint detection results using ViTPose and store them at `dataset/detection_results/<DATASET-NAME>/<SEQUENCE_NAME>.npy`. You may need to download all images to prepare the detection results.
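The exact ViTPose output format is not specified here; as an assumption, the sketch below treats each sequence file as an array of shape (num_frames, num_joints, 3) holding (x, y, confidence) per joint, and only illustrates the `<DATASET-NAME>/<SEQUENCE_NAME>.npy` storage convention described above:

```python
import os
import tempfile

import numpy as np

# Hypothetical per-sequence layout: (num_frames, num_joints, 3) with
# (x, y, confidence) per joint. 17 joints matches the common COCO keypoint
# set, but the real ViTPose output may differ.
num_frames, num_joints = 4, 17
keypoints = np.zeros((num_frames, num_joints, 3), dtype=np.float32)

with tempfile.TemporaryDirectory() as root:
    # One .npy per sequence, grouped by dataset name, as described above.
    seq_path = os.path.join(root, "3DPW", "sequence_name.npy")
    os.makedirs(os.path.dirname(seq_path), exist_ok=True)
    np.save(seq_path, keypoints)

    loaded = np.load(seq_path)
    assert loaded.shape == (num_frames, num_joints, 3)
```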

For the Human3.6M, MPII3D, and InstaVariety datasets, you also need to download the NeuralAnnot pseudo-ground-truth SMPL labels. As mentioned in our paper, we do not supervise WHAM with these labels; we use them only for the neural initialization step.

Finally, run the following commands to preprocess all training data.

python -m lib.data_utils.threedpw_train_utils       # 3DPW dataset
# [Coming] python -m lib.data_utils.human36m_train_utils       # Human3.6M dataset
# [Coming] python -m lib.data_utils.mpii3d_train_utils         # MPI-INF-3DHP dataset
# [Coming] python -m lib.data_utils.insta_train_utils          # InstaVariety dataset

Process BEDLAM dataset

Will be updated.

Evaluation Data

We use 3DPW, RICH, and EMDB for evaluation, and we provide the parsed data for them. Please download the data from Google Drive and place it at dataset/parsed_data/.

To process the data yourself:

  1. Download the parsed 3DPW data from TCMR and store it at `dataset/parsed_data/TCMR_preproc/`.
  2. Run ViTPose on all test data and store the results at `dataset/detection_results/<DATASET-NAME>`.
  3. Run the following commands.
python -m lib.data_utils.threedpw_eval_utils --split <"val" or "test">      # 3DPW dataset
python -m lib.data_utils.emdb_eval_utils --split <"1" or "2">               # EMDB dataset
python -m lib.data_utils.rich_eval_utils                                    # RICH dataset
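The first two commands above take a --split flag with a fixed set of allowed values. A minimal argparse sketch of such a flag is shown below; the real evaluation scripts may define more options, and only the choices listed in this document are encoded here:

```python
import argparse

def build_threedpw_eval_parser() -> argparse.ArgumentParser:
    """Hypothetical sketch of the --split flag for the 3DPW evaluation script.

    The actual lib.data_utils.threedpw_eval_utils module may parse its
    arguments differently; only the "val"/"test" choices quoted above
    are modeled here.
    """
    parser = argparse.ArgumentParser(description="3DPW evaluation preprocessing")
    parser.add_argument("--split", choices=["val", "test"], required=True)
    return parser

# Restricting choices makes an invalid split fail at parse time.
args = build_threedpw_eval_parser().parse_args(["--split", "test"])
assert args.split == "test"
```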