Fanbenchao/Action-Recognition-based-on-pose-estimation


Action Recognition based on pose estimation

We reproduce the results of the following CVPR 2018 paper:
2D/3D Pose Estimation and Action Recognition using Multitask Deep Learning

If you'd like to refer to the original code, please follow this link.

Our contribution is to supplement the original code with the missing pieces, such as the training process for 3D pose estimation and action recognition.

Details

Language: Python 3.6
Framework: TensorFlow 1.10+ / Keras 2.1.4

GPU: single GPU
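Assuming a pip-based environment, the pinned versions above can be installed as follows (the exact package spellings are an assumption; match them to your Python/CUDA setup):

```shell
# Install the framework versions this repo was tested with
# (TensorFlow 1.10, Keras 2.1.4). For single-GPU training,
# use the tensorflow-gpu package instead of tensorflow.
pip install "tensorflow==1.10.*" "keras==2.1.4"
```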

Datasets

MPII dataset: used to train the 2D pose estimation model, as in the paper.
NTU RGB+D dataset: used to train 3D pose estimation and action recognition, since we could not download the Penn Action and Human3.6M datasets.

Data Processing and Visualization

data_generator/annotation_process.py: processes the MPII dataset annotations.
data_generator/video_clip.ipynb: transforms the videos in the NTU dataset into RGB images.

data_generator/image_show.ipynb: draws 2D skeletons on images.

data_generator/3D_pose_imgShow.ipynb: draws 2D skeletons on images and plots 3D skeletons in a spatial map.
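The skeleton-drawing notebooks above boil down to connecting joint coordinates into limb segments. A minimal sketch of that step is shown below; the 16-joint ordering and limb pairs follow the standard MPII annotation convention, which is our assumption — the notebooks in this repo may index joints differently:

```python
# MPII 16-joint ordering (assumed; check against annotation_process.py).
MPII_JOINTS = [
    "r_ankle", "r_knee", "r_hip", "l_hip", "l_knee", "l_ankle",
    "pelvis", "thorax", "upper_neck", "head_top",
    "r_wrist", "r_elbow", "r_shoulder", "l_shoulder", "l_elbow", "l_wrist",
]

# Limbs as (joint_a, joint_b) index pairs into MPII_JOINTS.
LIMBS = [
    (0, 1), (1, 2), (2, 6), (3, 6), (3, 4), (4, 5),            # legs
    (6, 7), (7, 8), (8, 9),                                    # spine, head
    (10, 11), (11, 12), (12, 7), (13, 7), (13, 14), (14, 15),  # arms
]

def skeleton_segments(joints):
    """Turn a list of 16 (x, y) joint coordinates into line segments.

    Returns one ((x1, y1), (x2, y2)) pair per limb, ready to draw with
    any backend (matplotlib plot, OpenCV cv2.line, ...).
    """
    return [(joints[a], joints[b]) for a, b in LIMBS]
```

Each returned pair is one limb, so a full skeleton is 15 line-draw calls regardless of the plotting library used.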

Training Process

2D pose estimation: just run pose estimation.ipynb.
3D pose estimation: just run 3d_pose.ipynb.

Action recognition: just run 3d_pose.ipynb.
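The pose notebooks build on the paper's soft-argmax layer, which converts predicted heatmaps into joint coordinates differentiably, so the network can be trained end to end with a coordinate regression loss. A minimal NumPy sketch of that operation (the normalized pixel-center coordinate convention here is our assumption):

```python
import numpy as np

def soft_argmax_2d(heatmap):
    """Differentiable argmax over a 2D heatmap.

    Applies a spatial softmax, then returns the expected (x, y)
    coordinate under that distribution, in normalized [0, 1] units.
    """
    h, w = heatmap.shape
    # Spatial softmax (subtract the max for numerical stability).
    e = np.exp(heatmap - heatmap.max())
    p = e / e.sum()
    # Pixel-center coordinates along each axis, normalized to [0, 1].
    xs = (np.arange(w) + 0.5) / w
    ys = (np.arange(h) + 0.5) / h
    # Expected coordinate = probability-weighted average of positions.
    x = (p.sum(axis=0) * xs).sum()
    y = (p.sum(axis=1) * ys).sum()
    return x, y
```

A sharp peak at cell (row 8, col 20) of a 32x32 heatmap yields coordinates close to (20.5/32, 8.5/32), matching the hard argmax while remaining differentiable for training.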
