Abstract: The 3D morphable model (3DMM) performs favorably in 3D face reconstruction from a single image. However, a 3D face created by a 3DMM lacks fine shape and texture detail. Existing works address this issue by exploiting a neural network that generates a displacement map for finer details. This approach enhances the quality of the reconstructed face but increases complexity because it relies on a generative model. In addition, previous works reconstruct only the frontal part of the human face without a full head representation, due to the use of a simple 3DMM. They also neglect the facial-region-only constraint during texture extraction, which yields incorrect facial details. In this paper, we address these challenges by proposing a practical framework that combines two major neural-network modules, i.e. the DPMMNet and ResHairNet networks. In detail, we first generate a coarse 3D face shape through 3DMM fitting and mesh deformation. We then propose DPMMNet, a network that estimates a displacement map from an RGB input image to produce detailed geometric information. Next, we craft the ResHairNet module, a neural network that removes non-facial regions and fills them with artificial but plausible skin color and texture. Experimental results show that the proposed method reconstructs the 3D face and full head with a higher level of detail while also achieving approximately 12 times faster computation than the previous method.
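The refinement step described above, where a predicted displacement map adds geometric detail to the coarse 3DMM mesh, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `apply_displacement` and the constant displacement values standing in for a DPMMNet prediction are hypothetical, and the standard assumption is that each vertex is offset along its surface normal by the predicted scalar displacement.

```python
import numpy as np

def apply_displacement(vertices, normals, displacement):
    """Offset each vertex along its unit normal by a scalar displacement.

    vertices:     (N, 3) coarse mesh vertex positions
    normals:      (N, 3) per-vertex normals (need not be unit length)
    displacement: (N,)   per-vertex scalar offsets, e.g. sampled from a
                         predicted displacement map (here, stand-in values)
    """
    unit_normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return vertices + displacement[:, None] * unit_normals

# Toy coarse "mesh": three vertices at the origin, normals along +z
# (deliberately non-unit to show the normalization step).
coarse = np.zeros((3, 3))
normals = np.tile(np.array([0.0, 0.0, 2.0]), (3, 1))
disp = np.array([0.1, -0.05, 0.0])  # hypothetical DPMMNet output

detailed = apply_displacement(coarse, normals, disp)
print(detailed[:, 2])  # z offsets equal the displacement values
```

In a real pipeline the displacement values would be sampled from the network's output map via the mesh's UV coordinates rather than given per vertex directly.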