Domain adaptation of articulated pose estimation via synthetic pose prior

Published: 01 Jan 2017, Last Modified: 05 Mar 2025 · MVA 2017 · CC BY-SA 4.0
Abstract: This paper proposes an articulated pose estimation method based on a scene-specific pose prior for domain adaptation. In this field, many approaches to articulated human pose estimation have been proposed, and researchers have largely focused on improving accuracy on shared benchmark datasets. By contrast, using unlabeled data to adapt an estimator to a different scene remains uncommon, even though it is urgently needed: when a user wants to estimate human pose in a specific scene, creating a labeled dataset for training the estimator is too costly, so domain adaptation without labeled data is the key problem. We tackle this problem with a novel approach that uses a synthetic pose prior, built by simulating probable poses in the target scene. In addition to training a basic appearance-based pose estimator on a labeled dataset, we model the likelihood of human pose distributions generated from various motion-capture datasets and from environmental knowledge about the scene. Using this scene-specific likelihood of joint positions yields more appropriate estimates. In experiments, we adapt an estimator trained on images captured from various viewpoints to target images captured from a fixed viewpoint. Because the proposed method exploits the bias induced by the fixed viewpoint, it improves estimation accuracy.
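The idea of combining an appearance-based estimator with a scene-specific pose prior can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation: it assumes pose hypotheses `candidates` with appearance log-scores `scores`, and `prior_samples` of joint positions simulated for the target scene, and re-ranks the hypotheses by adding the log-density of a simple per-joint Gaussian prior.

```python
# Hypothetical sketch of pose re-scoring with a scene-specific prior.
# Names and the Gaussian prior form are illustrative assumptions,
# not taken from the paper.
import numpy as np

def fit_gaussian_prior(prior_samples):
    """Fit an independent Gaussian over each joint's 2D position."""
    mu = prior_samples.mean(axis=0)         # (J, 2) per-joint means
    var = prior_samples.var(axis=0) + 1e-6  # (J, 2), avoid zero variance
    return mu, var

def log_prior(pose, mu, var):
    """Log-density of one pose under the per-joint Gaussian prior."""
    return -0.5 * np.sum((pose - mu) ** 2 / var + np.log(2 * np.pi * var))

def rescore(candidates, scores, mu, var, weight=1.0):
    """Combine appearance log-scores with the scene-specific prior."""
    priors = np.array([log_prior(c, mu, var) for c in candidates])
    return scores + weight * priors

# Toy usage: a prior concentrated around poses typical of a fixed camera.
rng = np.random.default_rng(0)
prior_samples = rng.normal(loc=[[0.5, 0.2], [0.5, 0.8]],
                           scale=0.05, size=(100, 2, 2))
candidates = np.array([[[0.5, 0.2], [0.5, 0.8]],   # plausible in this scene
                       [[0.1, 0.9], [0.9, 0.1]]])  # implausible
scores = np.array([0.0, 0.0])                      # equal appearance scores
mu, var = fit_gaussian_prior(prior_samples)
best = int(np.argmax(rescore(candidates, scores, mu, var)))  # picks index 0
```

With equal appearance scores, the prior breaks the tie in favor of the hypothesis that is probable under the simulated scene-specific pose distribution, which mirrors how the paper exploits the bias of a fixed viewpoint.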