NeurIPS 2024 | 🤸‍♂️💥🚗 Pedestrian-Centric 3D Pre-collision Pose and Shape Estimation from Dashcam Perspective


πŸ€Έβ€β™‚οΈπŸ’₯πŸš— Pedestrian-Centric 3D Pre-collision Pose and Shape Estimation from Dashcam Perspective [video]

Dependencies

Python 3.7.16, PyTorch 1.11.0+cu113

conda create -n PVCP_env python=3.7
conda activate PVCP_env

# Please install PyTorch according to your CUDA version.
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt

Some of our code and dependencies were adapted from MotionBERT.

🔗 PVCP Dataset

We provide a dedicated tool for SMPL annotation: SMPL_Tools.

Download the PVCP Dataset (≈43 GB). Directory structure:

PVCP
├── annotation
│   ├── dataset_2dpose.json
│   ├── dataset_mesh (coming soon).json
│   ├── mb_input_det_pose.json
│   ├── train_test_seq_id_list.json
│   ├── mesh_det_pvcp_train_release (coming soon).pkl
│   └── mesh_det_pvcp_train_gt2d_test_det2d (coming soon).pkl
├── frame
│   └── image2frame.py
├── image
│   ├── S000_1280x720_F000000_T000000.png
│   ├── S000_1280x720_F000001_T000001.png
│   ├── S000_1280x720_F000002_T000002.png
│   ├── ...
│   └── S208_1584x660_F000207_T042510.png
├── video
│   ├── S000_1280x720.mp4
│   ├── S001_1280x720.mp4
│   ├── S002_1280x720.mp4
│   ├── ...
│   └── S208_1584x660.mp4
└── vis_2dkpt_ann.mp4
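The train/test split is defined per sequence in train_test_seq_id_list.json, and each image filename carries its sequence ID in the leading `S` field. A minimal sketch of partitioning the images by split, assuming (hypothetically; the actual schema may differ) that the JSON maps "train" and "test" to lists of integer sequence IDs:

```python
import json
import re
from pathlib import Path

# Image names look like S000_1280x720_F000000_T000000.png;
# the leading S-number is the sequence ID.
SEQ_RE = re.compile(r"^S(\d+)_")


def seq_id(filename: str) -> int:
    """Extract the integer sequence ID from a PVCP image filename."""
    m = SEQ_RE.match(filename)
    if m is None:
        raise ValueError(f"unexpected filename: {filename}")
    return int(m.group(1))


def split_images(image_dir: Path, split_json: Path):
    """Partition images into train/test lists by sequence ID.

    Assumes split_json maps "train"/"test" to lists of sequence IDs.
    """
    split = json.loads(split_json.read_text())
    train_ids, test_ids = set(split["train"]), set(split["test"])
    train, test = [], []
    for p in sorted(image_dir.glob("S*.png")):
        (train if seq_id(p.name) in train_ids else test).append(p)
    return train, test
```
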

To populate the frame folder, run image2frame.py. The resulting structure is as follows:

├── frame
    ├── frame_000000.png
    ├── frame_000001.png
    ├── frame_000002.png
    ├── ...
    └── frame_042510.png
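The mapping above suggests that the trailing `T` index in each image name is the global frame number, so the conversion amounts to copying `S*_F*_T*.png` to `frame_%06d.png`. A hypothetical re-implementation under that assumption (the actual image2frame.py may differ):

```python
import re
import shutil
from pathlib import Path

# In S000_1280x720_F000000_T000000.png, the T-index appears to be
# the global frame number used by the frame/ folder.
T_RE = re.compile(r"_T(\d{6})\.png$")


def frame_name(image_name: str) -> str:
    """Map a PVCP image filename to its global frame filename."""
    m = T_RE.search(image_name)
    if m is None:
        raise ValueError(f"no T-index in {image_name}")
    return f"frame_{m.group(1)}.png"


def image2frame(image_dir: Path, frame_dir: Path) -> None:
    """Copy every image into frame_dir under its global frame name."""
    frame_dir.mkdir(parents=True, exist_ok=True)
    for src in image_dir.glob("S*.png"):
        shutil.copy2(src, frame_dir / frame_name(src.name))
```
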

🚩 Stay Tuned For:

  • We are working on more refined gesture labeling.
  • We will add more types of annotation information.
  • ...

πŸ•ΈοΈ PPSENet Framework

(Figure: PPSENet framework overview.)

Project Directory Structure

PVCP
├── checkpoint
├── configs
│   ├── mesh
│   └── pretrain
├── data
│   ├── mesh
│   └── pvcp
├── lib
│   ├── data
│   ├── model
│   └── utils
├── params
├── tools
├── LICENSE
├── README_MotionBERT.md
├── requirements.txt
├── train_mesh_pvcp.py
└── infer_wild_mesh_list.py

Data

  1. Download the other datasets here and put them in data/mesh/. We use Human3.6M, COCO, and PW3D for training and testing. Descriptions of the joint regressors can be found in SPIN.
  2. Download the SMPL model (basicModel_neutral_lbs_10_207_0_v1.0.0.pkl) from SMPLify, put it in data/mesh/, and rename it to SMPL_NEUTRAL.pkl.
  3. Download the PVCP dataset and put it in data/pvcp/. Move mesh_det_pvcp_train_release.pkl and mesh_det_pvcp_train_gt2d_test_det2d.pkl to data/mesh/.
  • You can also skip the steps above and download our prepared data (including the PVCP dataset) and checkpoint folders directly. The final data directory structure is as follows:
    ├── data
        ├── mesh
        │   ├── J_regressor_extra.npy
        │   ├── J_regressor_h36m_correct.npy
        │   ├── mesh_det_coco.pkl
        │   ├── mesh_det_h36m.pkl
        │   ├── mesh_det_pvcp_train_gt2d_test_det2d.pkl
        │   ├── mesh_det_pvcp_train_release.pkl
        │   ├── mesh_det_pw3d.pkl
        │   ├── mesh_hybrik.zip
        │   ├── smpl_mean_params.npz
        │   └── SMPL_NEUTRAL.pkl
        └── pvcp
            ├── annotation
            │   ├── dataset_2dpose.json
            │   ├── dataset_mesh (coming soon).json
            │   ├── mb_input_det_pose.json
            │   ├── train_test_seq_id_list.json
            │   ├── mesh_det_pvcp_train_release (coming soon).pkl
            │   └── mesh_det_pvcp_train_gt2d_test_det2d (coming soon).pkl
            ├── frame
            ├── image
            └── video
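Before launching training, it can help to check that the layout above is complete. A small sanity check over the files the tree lists under data/mesh/ (a sketch, not part of the repo):

```python
from pathlib import Path

# Files expected under data/mesh/, taken from the directory tree above.
EXPECTED_MESH_FILES = [
    "J_regressor_extra.npy",
    "J_regressor_h36m_correct.npy",
    "mesh_det_coco.pkl",
    "mesh_det_h36m.pkl",
    "mesh_det_pvcp_train_gt2d_test_det2d.pkl",
    "mesh_det_pvcp_train_release.pkl",
    "mesh_det_pw3d.pkl",
    "smpl_mean_params.npz",
    "SMPL_NEUTRAL.pkl",
]


def missing_files(mesh_dir: Path) -> list:
    """Return the expected files that are absent from mesh_dir."""
    return [f for f in EXPECTED_MESH_FILES if not (mesh_dir / f).is_file()]


if __name__ == "__main__":
    missing = missing_files(Path("data/mesh"))
    if missing:
        print("missing:", ", ".join(missing))
    else:
        print("data/mesh looks complete")
```
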
    

Train

Finetune from a pretrained model on PVCP:

CUDA_VISIBLE_DEVICES=0,1,2,3 python train_mesh_pvcp.py \
--config configs/mesh/MB_ft_pvcp.yaml \
--pretrained checkpoint/pretrain/MB_release \
--checkpoint checkpoint/mesh/ft_pvcp_iter3_class0.1_gt_release

Evaluate

CUDA_VISIBLE_DEVICES=0,1,2,3 python train_mesh_pvcp.py \
--config configs/mesh/MB_ft_pvcp.yaml \
--evaluate checkpoint/mesh/ft_pvcp_iter3_class0.1_gt_release/best_epoch.bin 

Test and Demo

python infer_wild_mesh_list.py --out_path output/

👀 Visual

(Visual results: PNG 1–2, GIF 1–6.)

Citation

@inproceedings{wang2024pedestriancentric,
  title={Pedestrian-Centric 3D Pre-collision Pose and Shape Estimation from Dashcam Perspective},
  author={MeiJun Wang and Yu Meng and Zhongwei Qiu and Chao Zheng and Yan Xu and Xiaorui Peng and Jian Gao},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=ldvfaYzG35}
}
