3D Human Pose Estimation From Multi Person Stereo 360 Scenes

Published: 01 Jan 2019, Last Modified: 13 Nov 2024 · CVPR Workshops 2019 · CC BY-SA 4.0
Abstract: This paper presents a human tracking and 3D pose estimation algorithm for use with a pair of 360° cameras. We identify and track an individual throughout complex, multi-person scenes in both indoor and outdoor environments using appearance models and positional data, and produce a temporally consistent 3D skeleton by optimising a skeleton of realistic joint lengths over joint positions produced by Convolutional Pose Machines (CPMs). Our results show an average improvement of 22.67% over state-of-the-art deep learning approaches for tracking, as well as reasonable pose estimates using just two cameras.
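The abstract describes fitting a skeleton with realistic joint lengths to per-frame joint positions while keeping the result temporally consistent. The sketch below is a minimal illustration of that general idea, not the authors' implementation: it fits a toy kinematic chain to noisy 3D joint estimates (standing in for joints triangulated from CPM detections in the two 360° views) with a bone-length penalty and a smoothness term toward the previous frame. The joint hierarchy, bone lengths, weights, and the least-squares solver are all assumptions made for illustration.

```python
# Illustrative sketch only (assumed details, not the paper's code): fit a skeleton
# with fixed "realistic" bone lengths to noisy per-frame 3D joint estimates,
# with a temporal-consistency term toward the previous frame's solution.
import numpy as np
from scipy.optimize import least_squares

# Toy kinematic chain: parent index per joint (-1 = root) and target bone lengths (metres).
PARENTS = [-1, 0, 1, 2]            # e.g. hip -> spine -> shoulder -> head (assumed)
BONE_LEN = [0.0, 0.5, 0.45, 0.25]  # assumed limb lengths

def residuals(x, observed, prev, w_len=10.0, w_smooth=1.0):
    """Residuals: stay close to observed joints, respect bone lengths,
    and stay close to the previous frame for temporal consistency."""
    joints = x.reshape(-1, 3)
    res = [(joints - observed).ravel()]                   # data term
    for j, p in enumerate(PARENTS):                       # bone-length term
        if p >= 0:
            d = np.linalg.norm(joints[j] - joints[p])
            res.append(np.array([w_len * (d - BONE_LEN[j])]))
    if prev is not None:                                  # smoothness term
        res.append(w_smooth * (joints - prev).ravel())
    return np.concatenate(res)

def fit_frame(observed, prev=None):
    """Optimise joint positions for one frame, warm-starting from the previous frame."""
    x0 = (prev if prev is not None else observed).ravel()
    sol = least_squares(residuals, x0, args=(observed, prev))
    return sol.x.reshape(-1, 3)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = np.array([[0, 0, 1.0], [0, 0, 1.5], [0, 0, 1.95], [0, 0, 2.2]])
    prev = None
    for t in range(3):
        # Noisy joints stand in for triangulated CPM detections from the stereo pair.
        noisy = truth + rng.normal(scale=0.05, size=truth.shape)
        prev = fit_frame(noisy, prev)
        print(f"frame {t}: fitted joints\n{np.round(prev, 3)}")
```

In this formulation, the bone-length penalty enforces anatomically plausible limb lengths and the warm start plus smoothness term keeps the skeleton temporally consistent across frames, which matches the behaviour the abstract describes at a high level.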
