This is a PyTorch implementation of V2V-PoseNet ("V2V-PoseNet: Voxel-to-Voxel Prediction Network for Accurate 3D Hand and Human Pose Estimation from a Single Depth Map") for the ITOP dataset. It is largely based on the authors' Torch7 implementation and on dragonbook's PyTorch reimplementation of V2V-PoseNet for the MSRA hand dataset (https://github.com/dragonbook/V2V-PoseNet-pytorch).
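As a quick orientation, V2V-PoseNet's input is not a raw depth map but a binary voxel occupancy grid built from the 3D points of the depth image. The sketch below illustrates that voxelization step with NumPy; the grid size and cube extent are illustrative assumptions, not this repository's exact configuration.

```python
import numpy as np

def voxelize(points, center, cube_size=2.0, grid_size=88):
    """Bin 3D points (N, 3) into a (grid_size,)*3 binary occupancy grid
    centered on `center`, covering a cube of side `cube_size`
    (same units as the points)."""
    grid = np.zeros((grid_size, grid_size, grid_size), dtype=np.float32)
    # Map world coordinates to voxel indices.
    idx = np.floor(((points - center) / cube_size + 0.5) * grid_size).astype(int)
    # Keep only points that fall inside the cube.
    valid = np.all((idx >= 0) & (idx < grid_size), axis=1)
    i, j, k = idx[valid].T
    grid[i, j, k] = 1.0
    return grid

# Toy point cloud centered at the origin.
points = np.random.uniform(-0.5, 0.5, size=(1000, 3))
grid = voxelize(points, center=np.zeros(3))
print(grid.shape)  # (88, 88, 88)
```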
Download the ITOP dataset and store it in /datasets/itop/.
You will not need the ITOP center data.
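The snippet below sketches a check for the expected layout under the dataset directory (interpreted here as relative to the repository root). The HDF5 file names are assumptions based on the dataset's standard distribution, not something this repository documents; adjust them if your download differs.

```python
from pathlib import Path

# Assumed ITOP side-view files; the top-view files follow the same pattern.
EXPECTED = [
    "ITOP_side_train_depth_map.h5",
    "ITOP_side_train_labels.h5",
    "ITOP_side_test_depth_map.h5",
    "ITOP_side_test_labels.h5",
]

root = Path("datasets") / "itop"
missing = [name for name in EXPECTED if not (root / name).exists()]
print("missing files:", missing or "none")
```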
Follow dragonbook's installation guide.
Start training and evaluation with
python experiments/main.py
then render the results with
python experiments/draw_skeleton.py
This produces a short video clip with the estimated skeleton overlaid on the depth frames.
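For intuition, rendering the clip amounts to rasterizing the predicted joints and bones onto each depth frame before stacking the frames into a video. The sketch below shows that drawing step with NumPy only; the joint positions and bone list are made up for illustration and are not this repository's skeleton definition.

```python
import numpy as np

def draw_bone(img, p0, p1, value=255):
    """Draw a straight segment between pixel coordinates p0 and p1
    by sampling enough points along the line to cover every pixel."""
    n = int(max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))) + 1
    rows = np.linspace(p0[0], p1[0], n).round().astype(int)
    cols = np.linspace(p0[1], p1[1], n).round().astype(int)
    img[rows, cols] = value
    return img

frame = np.zeros((240, 320), dtype=np.uint8)  # stand-in for a depth frame
# Hypothetical 2D joint positions (row, col) and bones.
joints = {"head": (40, 160), "neck": (70, 160), "torso": (130, 160)}
bones = [("head", "neck"), ("neck", "torso")]
for a, b in bones:
    draw_bone(frame, joints[a], joints[b])
print(frame.max())  # 255 where the skeleton was drawn
```

A real script would repeat this per frame and feed the results to a video writer.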