A Pose-Aware Auto-Augmentation Framework for 3D Human Pose and Shape Estimation from Partial Point Clouds
Abstract: This work addresses the challenges of 3D human pose and shape estimation from real partial point clouds. Existing point-cloud-based methods for 3D human estimation usually generalize poorly to real data due to factors such as self-occlusion, random noise, and the domain gap between real and synthetic data. In this paper, we propose a pose-aware auto-augmentation framework for 3D human pose and shape estimation from partial point clouds. Specifically, we design an occlusion-aware module for the estimator network that extracts refined features to accurately regress human pose and shape parameters from partial point clouds, even when the point clouds are self-occluded. Based on the pose parameters and the global point-cloud features produced by the estimator network, we design a learnable augmentor network that intelligently drives and deforms real data to enrich data diversity during training of the estimator network. To guide the augmentor network toward generating challenging augmented samples, we adopt an adversarial learning strategy driven by the error feedback of the estimator. Experimental results on real and synthetic data demonstrate that the proposed approach accurately estimates 3D human pose and shape from partial point clouds and outperforms prior works in reconstruction accuracy.
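The adversarial error-feedback idea described above can be illustrated with a deliberately toy sketch: an "estimator" regresses a quantity from a point cloud, an "augmentor" deforms the cloud, and the augmentor's reward is the increase in estimator error it induces. All function names, the scalar parameters `theta` and `delta`, and the loss are hypothetical stand-ins for illustration only; they are not the networks or losses used in the paper.

```python
import numpy as np

def estimate(points, theta):
    # Toy "estimator": predicts a pose-like vector as a scaled mean of the cloud.
    # theta stands in for the estimator's learnable parameters.
    return points.mean(axis=0) * theta

def estimator_loss(pred, gt):
    # Mean squared error between prediction and ground truth.
    return float(np.mean((pred - gt) ** 2))

def augment(points, delta):
    # Toy "augmentor": rigidly shifts/deforms the cloud by delta.
    # delta stands in for the augmentor's learnable deformation.
    return points + delta

def augmentor_reward(points, gt, theta, delta):
    # Adversarial feedback: the augmentor is rewarded when the estimator's
    # error on the augmented cloud exceeds its error on the original cloud,
    # pushing it to generate challenging (but not arbitrary) samples.
    loss_orig = estimator_loss(estimate(points, theta), gt)
    loss_aug = estimator_loss(estimate(augment(points, delta), theta), gt)
    return loss_aug - loss_orig
```

In an actual training loop the two modules would alternate: the estimator minimizes its loss on both original and augmented clouds, while the augmentor maximizes this reward, which is the essence of the adversarial strategy the abstract describes.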