Abstract: Face reenactment aims to transfer the expression and pose of a driving face to a source face. Great progress has been made on this task with the recent success of deep generative models. However, it remains challenging when the two faces exhibit a large pose discrepancy: identity change and image distortion worsen as the discrepancy grows. To tackle these problems, we propose to exploit multiple forms of guidance derived from the 3D morphable face model (3DMM). First, a precomputed optical flow guides the estimation of motion fields. Second, a precomputed occlusion map guides the perception of occluded areas. Finally, a rendered image guides the restoration of missing content. We present a new reenactment framework that integrates the above guidance and generates high-quality results. Extensive experiments show the superior performance of our framework compared with several state-of-the-art methods, and ablation studies demonstrate the effectiveness of exploiting multiple forms of guidance from the 3DMM.
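The three forms of 3DMM guidance named in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's actual network: `warp_with_flow`, `reenact`, and all array shapes are assumptions for illustration. It shows the data flow only: the precomputed optical flow warps the source image, the occlusion map marks pixels the warp cannot recover, and the rendered image fills those occluded regions.

```python
import numpy as np

def warp_with_flow(src, flow):
    """Backward-warp an H x W x C image by a dense H x W x 2 flow field.

    Nearest-neighbor sampling stands in for the learned motion-field
    estimation that the precomputed flow guides in the paper.
    """
    h, w = src.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Sample source pixels displaced by the flow, clamped to the image.
    sx = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return src[sy, sx]

def reenact(src, flow, occlusion, render):
    """Combine the three guidance signals (hypothetical composition).

    occlusion: H x W map, 1 where the warped source is unreliable.
    render:    H x W x C 3DMM-rendered image used to fill those areas.
    """
    warped = warp_with_flow(src, flow)
    m = occlusion[..., None]
    # Keep warped content where visible; restore missing content
    # from the rendered image where occluded.
    return (1.0 - m) * warped + m * render
```

In the actual framework these steps are realized by learned modules conditioned on the 3DMM outputs; the sketch only makes the roles of the three guidance signals concrete.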
Submission Type: archival
Presentation Type: online
Presenter: Huayu Zhang