This repository contains the PyTorch implementation of the paper SpecTrHuMS: Spectral Transformer for Human Mesh Sequence learning by Clément Lemeunier, Florence Denis, Guillaume Lavoué and Florent Dupont.
The code was tested on Linux with PyTorch 2.0.1, CUDA 11.7 and Python 3.10.
Create a conda environment and activate it:
conda create -n SpecTrHuMS python=3.10
conda activate SpecTrHuMS
Install PyTorch.
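For example, PyTorch 2.0.1 built for CUDA 11.7 can typically be installed with pip (adapt this to your platform; refer to the official PyTorch installation instructions for the exact command):
pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu117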
Install Human Body Prior: clone the repository and, with the conda environment activated, execute the commands from its Installation chapter (launch both commands even if errors appear).
Install requirements:
pip install -r requirements.txt
Download this archive and extract it to the root folder. It contains the SMPL connectivity with its eigenvectors computed with Matlab, as well as a pretrained model corresponding to SpecTrHuMS-MI (see paper). This should create two folders: data and checkpoints.
Go to the MANO Download webpage and download the Extended SMPL+H model. Create a folder smplh in the data folder previously created, and extract the archive into the smplh folder. Then, go to the SMPL Download webpage and download the DMPLs compatible with SMPL, create a folder dmpls in the data folder, and extract the archive into the dmpls folder. This should give the following hierarchy:
data
┣ dmpls
┃ ┣ female
┃ ┃ ┗ model.npz
┃ ┣ male
┃ ┃ ┗ model.npz
┃ ┣ neutral
┃ ┃ ┗ model.npz
┣ smplh
┃ ┣ female
┃ ┃ ┗ model.npz
┃ ┣ male
┃ ┃ ┗ model.npz
┃ ┣ neutral
┃ ┃ ┗ model.npz
┣ evecs_4096.bin
┗ faces.bin
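Once these files are in place, you can optionally sanity-check the body model files with human_body_prior. The snippet below is a minimal sketch and assumes the constructor arguments of recent human_body_prior versions (bm_fname, dmpl_fname, num_betas, num_dmpls); adapt it if your installed version uses different argument names:

```python
# Minimal sanity check of the SMPL+H and DMPL files (sketch only; argument
# names assume a recent human_body_prior version and may need adapting).
from human_body_prior.body_model.body_model import BodyModel

bm = BodyModel(
    bm_fname="data/smplh/neutral/model.npz",    # SMPL+H body model
    dmpl_fname="data/dmpls/neutral/model.npz",  # DMPL dynamic blend shapes
    num_betas=16,
    num_dmpls=8,
)

# A forward pass with default (zero) parameters returns the template mesh.
body = bm()
print(body.v.shape)  # expected: (1, 6890, 3) vertices for a single frame
print(bm.f.shape)    # expected: (13776, 3) triangle faces
```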
First, download the following AMASS datasets:
- CMU
- MPI_Limits
- TotalCapture
- Eyes_Japan_Dataset
- KIT
- EKUT
- TCD_handMocap
- ACCAD
- BioMotionLab_NTroje
Then set the variable amass_directory in the file dataset_creation/default_options_dataset.json to the path where you downloaded the AMASS datasets.
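If you prefer to set this field programmatically, the JSON file can be patched with a few lines of Python (a sketch; it only assumes the amass_directory key mentioned above and leaves the other options untouched):

```python
# Point amass_directory at the folder containing the extracted AMASS datasets
# (sketch; replace the path with your own).
import json

config_path = "dataset_creation/default_options_dataset.json"
with open(config_path) as f:
    options = json.load(f)

options["amass_directory"] = "/path/to/amass"  # e.g. the folder containing CMU/, KIT/, ...

with open(config_path, "w") as f:
    json.dump(options, f, indent=4)
```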
Then, launch the following script:
python create_dataset.py
This will create a dataset made of multiple identities; its size is approximately 20 GB. You can use fewer frequencies to obtain a smaller dataset by modifying the corresponding value in the file dataset_creation/default_options_dataset.json, but that dataset will not be usable with the provided pretrained model and you will have to retrain a new one.
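For intuition, the "frequencies" are the leading eigenvectors of the SMPL mesh (stored in data/evecs_4096.bin): each frame is projected onto a truncated spectral basis, so keeping fewer frequencies stores fewer coefficients per frame and therefore yields a smaller dataset. A rough sketch of the idea, not the repository's dataset-creation code, using stand-in arrays:

```python
# Truncated spectral encoding/decoding of one mesh frame (illustration only;
# evecs and verts are random stand-ins for the precomputed eigenvectors and
# one frame of SMPL vertices).
import numpy as np

num_vertices, num_freqs = 6890, 4096
evecs = np.random.randn(num_vertices, num_freqs)  # stand-in for data/evecs_4096.bin
verts = np.random.randn(num_vertices, 3)          # stand-in for one frame of vertices

k = 1024                                  # number of frequencies kept in the dataset
coeffs = evecs[:, :k].T @ verts           # (k, 3) spectral coefficients
verts_low_freq = evecs[:, :k] @ coeffs    # (6890, 3) low-frequency reconstruction
```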
A pretrained model is available in the checkpoints/SpecTrHuMS-MI/ directory and corresponds to the Spectral Transformer for Human Mesh Sequence learning using Multiple Identities.
It is possible to:
- evaluate the model's scores on the test dataset: python test.py --load_job_id=SpecTrHuMS-MI. After computation, scroll up a bit in the console to view the results; this reproduces the results of Table 3, line SpecTrHuMS-MI.
- visualize its ability to predict the end of animations: python visualize.py --load_job_id=SpecTrHuMS-MI, which uses aitviewer (a minimal standalone example is sketched below).
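If you want to build your own visualizations of predicted sequences, aitviewer can display an animated mesh sequence directly. A minimal sketch, assuming the Meshes renderable with a (frames, vertices, 3) vertex array (this is not the repository's visualize.py):

```python
# Minimal aitviewer usage for an animated mesh sequence (sketch; replace the
# random arrays with predicted SMPL vertices and the faces from data/faces.bin).
import numpy as np
from aitviewer.renderables.meshes import Meshes
from aitviewer.viewer import Viewer

vertices = np.random.randn(30, 6890, 3) * 0.05   # (frames, vertices, 3) stand-in
faces = np.random.randint(0, 6890, (13776, 3))   # stand-in connectivity

viewer = Viewer()
viewer.scene.add(Meshes(vertices, faces, name="predicted sequence"))
viewer.run()
```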
In order to train a model, execute the following command, specifying a job_id:
python train.py --job_id=0
or
python train.py --job_id=SpecTrHuMS-test
Training is done using the PyTorch Lightning framework. It will create a new folder in the checkpoints/ directory and write logs in that folder, which you can visualize using the command tensorboard --logdir checkpoints/.
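For reference, training follows the standard PyTorch Lightning pattern: a Trainer is pointed at a per-job directory, and checkpoints plus TensorBoard logs are written there. The toy sketch below only illustrates that generic pattern; it is not the SpecTrHuMS model or data from this repository:

```python
# Generic PyTorch Lightning training pattern (toy module and data; not the
# SpecTrHuMS code, just an illustration of where checkpoints and logs go).
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class ToyModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(16, 16)

    def training_step(self, batch, batch_idx):
        (x,) = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), x)
        self.log("train_loss", loss)  # shows up in TensorBoard
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

train_loader = DataLoader(TensorDataset(torch.randn(256, 16)), batch_size=32)

trainer = pl.Trainer(
    default_root_dir="checkpoints/SpecTrHuMS-test",  # checkpoints and logs are written here
    max_epochs=2,
    accelerator="auto",
)
trainer.fit(ToyModule(), train_loader)
# Inspect the curves afterwards with: tensorboard --logdir checkpoints/
```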
An additional video file is provided (in the video folder) in order to better visualize the results.
This work was supported by the ANR project Human4D ANR-19-CE23-0020 and was granted access to the AI resources of IDRIS under the allocation 2023-AD011012424R2 made by GENCI.
This work is Copyright of University of Lyon, 2023. It is distributed under the Mozilla Public License v. 2.0. (refer to the accompanying file LICENSE-MPL2.txt or a copy at http://mozilla.org/MPL/2.0/).