Abstract: We propose a fast online method for video pose estimation that detects and tracks human upper-body poses via conditional dynamic Bayesian modeling of pose modes, without referring to future frames. Estimating human body poses from video is an important task with many applications. Our method extends fast image-based pose estimation to live video streams by leveraging the temporal correlation of articulated poses between frames. Pose estimates are inferred over a time window using a conditional dynamic Bayesian network (CDBN), which we term T-CDBN. Specifically, latent pose modes and their transitions are modeled and co-determined by combining three modules: (1) inference based on current observations, (2) modeling of mode-to-mode transitions as a probabilistic prior, and (3) modeling of state-to-mode transitions using a multi-mode softmax regression. Given the predicted pose modes, body poses, expressed as arm joint locations, can then be determined more accurately and robustly. Our method is particularly suited to high-frame-rate (HFR) scenarios, where pose mode transitions effectively capture action-related priors to boost performance. We evaluate our method on a newly collected HFR-Pose dataset and four major video pose datasets (VideoPose2, TUM Kitchen, FLIC, and Penn Action). Our method achieves improvements in both accuracy and efficiency over existing online video pose estimation methods.
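The three-module combination described above can be sketched in a few lines. This is an illustrative sketch under our own assumptions, not the authors' implementation: all names, shapes, and the multiplicative fusion of the three terms are hypothetical, chosen only to show how an observation likelihood (module 1), a mode-to-mode transition prior (module 2), and a state-to-mode softmax regression (module 3) could jointly score latent pose modes at each frame.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def infer_pose_mode(obs_likelihood, prev_mode_probs, mode_transition, W, prev_state):
    """Hypothetical posterior over K latent pose modes at time t.

    obs_likelihood:  (K,) likelihood of the current frame under each mode (module 1)
    prev_mode_probs: (K,) mode posterior at t-1
    mode_transition: (K, K) row-stochastic mode-to-mode prior (module 2)
    W:               (K, D) weights of a multi-mode softmax regression (module 3)
    prev_state:      (D,) body-state features (e.g. arm joint locations) at t-1
    """
    prior_from_modes = prev_mode_probs @ mode_transition   # module 2: propagate modes
    prior_from_state = softmax(W @ prev_state)             # module 3: state-to-mode
    scores = obs_likelihood * prior_from_modes * prior_from_state  # fuse with module 1
    return scores / scores.sum()

# Toy usage with random inputs (K modes, D state dimensions).
K, D = 3, 4
rng = np.random.default_rng(0)
post = infer_pose_mode(rng.random(K) + 1e-3,
                       np.full(K, 1.0 / K),
                       np.full((K, K), 1.0 / K),
                       rng.standard_normal((K, D)),
                       rng.standard_normal(D))
```

Because every factor depends only on the current frame and the previous time step, the update runs strictly online, which matches the no-future-frames constraint stated in the abstract.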