PIFu for the Real World: A Self-supervised Framework to Reconstruct Dressed Human from Single-View Images

Published: 01 Jan 2024, Last Modified: 13 Nov 2024 · CVM (1) 2024 · CC BY-SA 4.0
Abstract: Accurately reconstructing sophisticated human geometry arising from diverse poses and garments from a single image is very challenging. Recently, works based on the pixel-aligned implicit function (PIFu) have made significant progress and achieved state-of-the-art fidelity in image-based 3D human digitization. However, training a PIFu relies heavily on expensive and limited 3D ground-truth data (i.e., synthetic data), which hinders its generalization to more diverse real-world images. In this work, we propose an end-to-end self-supervised network named SelfPIFu that exploits abundant and diverse in-the-wild images, resulting in largely improved reconstructions on unconstrained in-the-wild images. At the core of SelfPIFu is depth-guided volume-/surface-aware signed distance field (SDF) learning, which enables self-supervised training of a PIFu without access to ground-truth (GT) meshes. The full framework consists of a normal estimator, a depth estimator, and an SDF-based PIFu, and makes better use of additional depth GT during training. Extensive experiments demonstrate the effectiveness of our self-supervised framework and the superiority of using depth as input. On synthetic data, our method reaches an Intersection-over-Union (IoU) of 89.03%, which is 20% and 28.6% higher than PIFuHD and ECON, respectively. For in-the-wild images, our method excels at reconstructing geometric details that are both rich and highly faithful to the actual subject, as illustrated in Figs. 1 and 11.
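For readers unfamiliar with pixel-aligned implicit functions, below is a minimal PyTorch-style sketch of the core query that an SDF-based PIFu performs: project each 3D point into the image, bilinearly sample the feature map at that pixel, and regress a signed distance with an MLP. The class name `PixelAlignedSDF`, the orthographic projection, and all dimensions are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelAlignedSDF(nn.Module):
    """Hypothetical sketch of a pixel-aligned implicit function with
    an SDF head: (pixel-aligned feature, depth) -> signed distance."""

    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),  # predicted signed distance
        )

    def forward(self, feat_map, points):
        """feat_map: (B, C, H, W) image features from an encoder.
        points: (B, N, 3) in normalized camera coordinates, with xy
        in [-1, 1] (assumes an orthographic camera, as in PIFu)."""
        xy = points[..., :2]   # pixel-plane coordinates of each query
        z = points[..., 2:]    # depth of each query along the ray
        # Bilinearly sample one feature vector per 3D query point.
        feats = F.grid_sample(
            feat_map, xy.unsqueeze(2), align_corners=True
        )                                           # (B, C, N, 1)
        feats = feats.squeeze(-1).transpose(1, 2)   # (B, N, C)
        return self.mlp(torch.cat([feats, z], dim=-1))  # (B, N, 1)
```

In use, such a network would be queried on a dense 3D grid and the mesh extracted as the zero level set of the predicted SDF (e.g. via marching cubes); the SDF output is what distinguishes this sketch from the original occupancy-based PIFu.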