Detailed 3D Face Reconstruction in Full Pose Range

19 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Representation learning for computer vision, Detailed 3D face reconstruction, Large pose
TL;DR: Unlike existing works, our method reconstructs facial details well in large-pose scenarios.
Abstract: Monocular detailed 3D face reconstruction aims to recover a realistic 3D face from a single face image. Although existing two-stage reconstruction methods have achieved great success, they still struggle to reconstruct accurate shapes and believable details for large-pose images. The former is because large-pose data typically make up only a small proportion of their training sets, limiting coarse 3D face reconstruction; the latter is caused by the loss of facial details in the self-occluded regions of large-pose images. To perform detailed 3D face reconstruction over the full pose range, we propose a self-augment mechanism and a self-supervised detail reconstruction method for large-pose images at the two stages, respectively. Specifically, in the first stage, the self-augment mechanism generates a set of large-pose data from each training image for re-learning. In the second stage, we pad the self-occluded side of the unwrapped input image according to a face-symmetry prior and design a Recursive Image-to-Image Translation Network, constrained by the details of the input image, to estimate the original details. In this way, we weaken the training-set constraints on coarse 3D face reconstruction and recover believable facial details for large-pose images, enabling detailed 3D face reconstruction over the full pose range. Extensive experiments show that our method achieves a level comparable to state-of-the-art methods.
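The second-stage padding step described in the abstract can be illustrated with a minimal sketch. The snippet below is an assumption of how symmetry-prior padding might look, not the authors' implementation: given an unwrapped (UV-space) face image and a boolean visibility mask, self-occluded pixels are filled with their horizontally mirrored counterparts, exploiting the approximate left-right symmetry of a face in UV space. The function name and array conventions are hypothetical.

```python
import numpy as np

def pad_by_symmetry(uv_image: np.ndarray, visible: np.ndarray) -> np.ndarray:
    """Fill self-occluded pixels of an unwrapped face image using the
    face-symmetry prior (illustrative sketch, not the paper's code).

    uv_image: (H, W) or (H, W, C) unwrapped texture/detail map,
              with the face's symmetry axis at the vertical midline.
    visible:  (H, W) boolean mask, True where the pixel was observed.
    """
    mirrored = uv_image[:, ::-1]   # reflect across the vertical midline
    padded = uv_image.copy()
    occluded = ~visible            # pixels lost to self-occlusion
    padded[occluded] = mirrored[occluded]
    return padded

# Toy example: a 1x4 row where the right half is self-occluded.
row = np.array([[1.0, 2.0, 3.0, 4.0]])
mask = np.array([[True, True, False, False]])
print(pad_by_symmetry(row, mask))  # occluded pixels take mirrored values
```

In the paper, the padded image is then refined by the Recursive Image-to-Image Translation Network rather than used directly, since real faces are only approximately symmetric.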
Supplementary Material: pdf
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1817