Abstract: This work addresses the problem of reconstructing clothed humans from unseen partial point clouds. Existing methods either estimate per-vertex offsets on top of parametric models to capture clothing details, but are constrained to a fixed topology, or reconstruct non-parametric shapes with implicit functions, which lack semantic information. Moreover, due to limited training data, large variations in clothing details, and domain gaps between training data and real data, these methods often generalize poorly to real data. In this paper, we propose a generalizable approach for estimating dressed human models from single-frame partial point clouds based on meta-learning. Specifically, we first learn a meta-model that can efficiently estimate the parameters of unclothed human models from unseen data via fast fine-tuning. On top of the unclothed human models, we further meta-learn point-based clothed human models with local geometric features, which are topologically flexible and rich in geometric detail. Our approach outperforms previous work in reconstruction accuracy, as demonstrated by qualitative and quantitative results on a variety of datasets.
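The abstract does not specify which meta-learning algorithm underlies the "meta-model plus fast fine-tuning" scheme. As a rough illustration only, the sketch below uses a Reptile-style first-order meta-update (a deliberately simple stand-in, not necessarily the paper's method). The hypothetical `meta_model` is assumed to regress unclothed body-model parameters (e.g., pose and shape) from a partial point cloud; the task structure, MSE loss, and hyperparameters are all assumptions for illustration.

```python
import copy
import torch

def reptile_meta_step(meta_model, tasks, inner_lr=1e-3, inner_steps=5, meta_lr=0.1):
    """One Reptile-style meta-update (illustrative sketch).

    Each task is a (points, target_params) pair: a partial point cloud and
    the body-model parameters it should map to. A clone of the meta-model is
    fast fine-tuned on each task; the meta-weights are then moved toward the
    average of the adapted weights.
    """
    meta_state = copy.deepcopy(meta_model.state_dict())
    deltas = {k: torch.zeros_like(v) for k, v in meta_state.items()}

    for points, target_params in tasks:
        # Inner loop: fast fine-tuning of a task-specific copy.
        model = copy.deepcopy(meta_model)
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(model(points), target_params)
            loss.backward()
            opt.step()
        # Accumulate the averaged weight change produced by adaptation.
        for k, v in model.state_dict().items():
            deltas[k] += (v - meta_state[k]) / len(tasks)

    # Outer update: interpolate meta-weights toward the adapted weights.
    meta_model.load_state_dict({k: meta_state[k] + meta_lr * deltas[k] for k in meta_state})
```

At test time the same fast fine-tuning (the inner loop above) would be run once on the unseen scan, which is what gives the meta-learned initialization its generalization benefit.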