Which demo code should I use if I want to do pose inference over a video stream? I see that there is one for the webcam, `python demo/inferencer_demo.py webcam`, and one for image/video files, but is there one for a video stream?

Basically, I want to get the keypoints in a streaming fashion. I used `--pred-out-dir` to get a JSON file with the keypoints, but how does this work on a video stream? Do I have to continuously check this file for updates?
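To make the question concrete, here is a minimal sketch of the consumption pattern I'm after: getting per-frame predictions yielded as they are produced, instead of polling an output file. `read_frames` and `infer_keypoints` are hypothetical stand-ins (not MMPose APIs) for a real video-capture loop and a real pose model call.

```python
# Sketch of streaming keypoint consumption. `read_frames` and
# `infer_keypoints` are hypothetical placeholders, not MMPose APIs.
from typing import Dict, Iterator, List, Tuple


def read_frames(source: str) -> Iterator[bytes]:
    """Stand-in for a capture loop (e.g. over an RTSP URL); yields raw frames."""
    for i in range(3):  # pretend the stream produced three frames
        yield f"frame-{i}".encode()


def infer_keypoints(frame: bytes) -> List[Tuple[float, float]]:
    """Stand-in for a pose model; returns one dummy keypoint per frame."""
    return [(0.0, float(len(frame)))]


def stream_keypoints(source: str) -> Iterator[Dict]:
    """Yield each frame's predictions as soon as that frame is processed."""
    for idx, frame in enumerate(read_frames(source)):
        yield {"frame_id": idx, "keypoints": infer_keypoints(frame)}


if __name__ == "__main__":
    # Consume results frame by frame, with no intermediate JSON file.
    for result in stream_keypoints("rtsp://example/stream"):
        print(result["frame_id"], result["keypoints"])
```

In other words: is there a demo (or an inferencer entry point) that yields results like this generator does, rather than only writing them to `--pred-out-dir` when the run finishes?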