🤸‍♂️🔥🚗 Pedestrian-Centric 3D Pre-collision Pose and Shape Estimation from Dashcam Perspective [video]
conda create -n PVCP_env python=3.7
conda activate PVCP_env
# Please install PyTorch according to your CUDA version.
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt
Some of our code and dependencies were adapted from MotionBERT.
We provide a dedicated tool for SMPL annotation: SMPL_Tools.
Download the PVCP Dataset (≈43 GB). Directory structure:
PVCP
├── annotation
│   ├── dataset_2dpose.json
│   ├── dataset_mesh (coming soon).json
│   ├── mb_input_det_pose.json
│   ├── train_test_seq_id_list.json
│   ├── mesh_det_pvcp_train_release (coming soon).pkl
│   └── mesh_det_pvcp_train_gt2d_test_det2d (coming soon).pkl
├── frame
│   └── image2frame.py
├── image
│   ├── S000_1280x720_F000000_T000000.png
│   ├── S000_1280x720_F000001_T000001.png
│   ├── S000_1280x720_F000002_T000002.png
│   ├── ...
│   └── S208_1584x660_F000207_T042510.png
├── video
│   ├── S000_1280x720.mp4
│   ├── S001_1280x720.mp4
│   ├── S002_1280x720.mp4
│   ├── ...
│   └── S208_1584x660.mp4
└── vis_2dkpt_ann.mp4
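The image filenames appear to follow the pattern `S{seq}_{width}x{height}_F{frame}_T{total}.png` (sequence ID, resolution, per-sequence frame index, global frame index). A minimal parsing sketch; this reading of the fields is inferred from the examples above, not an official spec, and `parse_image_name` is our own helper name:

```python
import re

# Assumed convention: S{seq}_{W}x{H}_F{frame}_T{total}.png
NAME_RE = re.compile(
    r"S(?P<seq>\d+)_(?P<w>\d+)x(?P<h>\d+)_F(?P<frame>\d+)_T(?P<total>\d+)\.png"
)

def parse_image_name(name):
    """Split a PVCP image filename into its numeric fields (assumed convention)."""
    m = NAME_RE.fullmatch(name)
    if m is None:
        raise ValueError(f"unexpected filename: {name}")
    return {k: int(v) for k, v in m.groupdict().items()}

print(parse_image_name("S000_1280x720_F000000_T000000.png"))
# -> {'seq': 0, 'w': 1280, 'h': 720, 'frame': 0, 'total': 0}
```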
For the `frame` folder, run `image2frame.py`. The resulting folder structure is as follows:
└── frame
    ├── frame_000000.png
    ├── frame_000001.png
    ├── frame_000002.png
    ├── ...
    └── frame_042510.png
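If you need to regenerate the `frame` folder yourself, the mapping from `image` files to `frame_%06d.png` can presumably be derived from the global `T` index in each filename. A hypothetical sketch of that step; the released `image2frame.py` is authoritative, and `flatten_to_frames` is our own name:

```python
import re
import shutil
from pathlib import Path

# Assumed convention: the T field is a global frame index across all sequences.
T_RE = re.compile(r"_T(\d+)\.png$")

def flatten_to_frames(image_dir, frame_dir):
    """Copy S*_F*_T*.png images to frame_%06d.png keyed by the global T index."""
    frame_dir = Path(frame_dir)
    frame_dir.mkdir(parents=True, exist_ok=True)
    for src in sorted(Path(image_dir).glob("*.png")):
        m = T_RE.search(src.name)
        if m is None:
            continue  # skip files that do not follow the naming convention
        shutil.copy(src, frame_dir / f"frame_{int(m.group(1)):06d}.png")
```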
- We are working on more refined gesture labeling.
- We will add more types of annotation information.
- ...
PVCP
├── checkpoint
├── configs
│   ├── mesh
│   └── pretrain
├── data
│   ├── mesh
│   └── pvcp
├── lib
│   ├── data
│   ├── model
│   └── utils
├── params
├── tools
├── LICENSE
├── README_MotionBERT.md
├── requirements.txt
├── train_mesh_pvcp.py
└── infer_wild_mesh_list.py
- Download the other datasets here and put them to `data/mesh/`. We use Human3.6M, COCO, and PW3D for training and testing. Descriptions of the joint regressors can be found in SPIN.
- Download the SMPL model (`basicModel_neutral_lbs_10_207_0_v1.0.0.pkl`) from SMPLify, put it to `data/mesh/`, and rename it as `SMPL_NEUTRAL.pkl`.
- Download the PVCP dataset and put it to `data/pvcp/`. Move `mesh_det_pvcp_train_release.pkl` and `mesh_det_pvcp_train_gt2d_test_det2d.pkl` to `data/mesh/`.
- You can also skip the above steps and download our data (including the PVCP Dataset) and checkpoint folders directly. Finally, the `data` directory structure is as follows:

data
├── mesh
│   ├── J_regressor_extra.npy
│   ├── J_regressor_h36m_correct.npy
│   ├── mesh_det_coco.pkl
│   ├── mesh_det_h36m.pkl
│   ├── mesh_det_pvcp_train_gt2d_test_det2d.pkl
│   ├── mesh_det_pvcp_train_release.pkl
│   ├── mesh_det_pw3d.pkl
│   ├── mesh_hybrik.zip
│   ├── smpl_mean_params.npz
│   └── SMPL_NEUTRAL.pkl
└── pvcp
    ├── annotation
    │   ├── dataset_2dpose.json
    │   ├── dataset_mesh (coming soon).json
    │   ├── mb_input_det_pose.json
    │   ├── train_test_seq_id_list.json
    │   ├── mesh_det_pvcp_train_release (coming soon).pkl
    │   └── mesh_det_pvcp_train_gt2d_test_det2d (coming soon).pkl
    ├── frame
    ├── image
    └── video
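Before launching training, it may help to verify that the layout above is in place. A small checker sketch; the file list is taken from the tree above, and `missing_files` is our own helper name:

```python
from pathlib import Path

# Files under data/mesh/ that the pipeline expects, per the tree above.
REQUIRED_MESH_FILES = [
    "J_regressor_extra.npy",
    "J_regressor_h36m_correct.npy",
    "mesh_det_pvcp_train_release.pkl",
    "mesh_det_pvcp_train_gt2d_test_det2d.pkl",
    "SMPL_NEUTRAL.pkl",
]

def missing_files(data_root):
    """Return the required data/mesh files that are absent under data_root."""
    mesh = Path(data_root) / "mesh"
    return [name for name in REQUIRED_MESH_FILES if not (mesh / name).exists()]

# Example: report anything missing before training.
print(missing_files("data"))
```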
Fine-tune from a pretrained model with PVCP:
CUDA_VISIBLE_DEVICES=0,1,2,3 python train_mesh_pvcp.py \
--config configs/mesh/MB_ft_pvcp.yaml \
--pretrained checkpoint/pretrain/MB_release \
--checkpoint checkpoint/mesh/ft_pvcp_iter3_class0.1_gt_release
Evaluate:
CUDA_VISIBLE_DEVICES=0,1,2,3 python train_mesh_pvcp.py \
--config configs/mesh/MB_ft_pvcp.yaml \
--evaluate checkpoint/mesh/ft_pvcp_iter3_class0.1_gt_release/best_epoch.bin
Run in-the-wild inference:
python infer_wild_mesh_list.py --out_path output/
@inproceedings{wang2024pedestriancentric,
title={Pedestrian-Centric 3D Pre-collision Pose and Shape Estimation from Dashcam Perspective},
author={MeiJun Wang and Yu Meng and Zhongwei Qiu and Chao Zheng and Yan Xu and Xiaorui Peng and Jian Gao},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=ldvfaYzG35}
}