Personalized face modeling for improved face reconstruction and motion retargeting

Published: 23 Aug 2020, Last Modified: 03 Mar 2026, ECCV 2020, CC BY-SA 4.0
Abstract: Traditional methods for image-based 3D face reconstruction and facial motion retargeting fit a 3D morphable model (3DMM) to the face; the 3DMM's limited modeling capacity often fails to generalize to in-the-wild data. Approaches that use deformation transfer or a multilinear tensor as a personalized 3DMM for blendshape interpolation do not account for the fact that the same facial expression produces different local and global skin deformations on different individuals. Moreover, existing methods typically learn a single albedo per user, which is insufficient to capture expression-specific skin reflectance variations. We propose an end-to-end framework that jointly learns a personalized face model for each user and per-frame facial motion parameters from a large corpus of in-the-wild videos of user expressions. Specifically, we learn user-specific expression blendshapes and dynamic (expression-specific) albedo maps by predicting personalized corrections on top of a 3DMM prior. We introduce novel training constraints to ensure that the corrected blendshapes retain their semantic meaning and that the reconstructed geometry is disentangled from the albedo. Experimental results demonstrate that our personalization accurately captures fine-grained facial dynamics across diverse conditions and effectively decouples the learned face model from facial motion, yielding more accurate face reconstruction and facial motion retargeting than state-of-the-art methods.
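
The core idea in the abstract, representing geometry as a 3DMM prior plus learned per-user corrections to generic expression blendshapes, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all names, array shapes, and the random placeholder data are assumptions, and the learned correction would in practice be predicted by the paper's network rather than initialized to zero.

```python
import numpy as np

# Assumed dimensions (illustrative only):
# N vertices, K_id identity basis vectors, K_exp expression blendshapes.
N, K_id, K_exp = 5000, 80, 51

rng = np.random.default_rng(0)
mean_shape = rng.standard_normal((N, 3))              # 3DMM mean face
id_basis = rng.standard_normal((K_id, N, 3))          # identity (shape) basis
exp_blendshapes = rng.standard_normal((K_exp, N, 3))  # generic expression blendshapes

# Per-user correction offsets (the "personalization"): in the paper these are
# learned so that expressions deform this specific user's skin correctly.
delta_blendshapes = np.zeros((K_exp, N, 3))


def reconstruct(alpha, beta, delta=delta_blendshapes):
    """Geometry = 3DMM prior + personalized blendshape corrections.

    alpha: (K_id,) identity coefficients, fixed per user.
    beta:  (K_exp,) per-frame expression coefficients (the facial motion).
    """
    personalized = exp_blendshapes + delta  # user-specific blendshapes
    return (mean_shape
            + np.tensordot(alpha, id_basis, axes=1)
            + np.tensordot(beta, personalized, axes=1))


# Because the face model (alpha, personalized blendshapes) is decoupled from
# the motion (beta), retargeting amounts to applying one person's per-frame
# beta to another person's personalized model.
verts = reconstruct(alpha=rng.standard_normal(K_id),
                    beta=rng.standard_normal(K_exp))
print(verts.shape)  # (5000, 3)
```

The dynamic albedo described in the abstract would follow the same pattern: a per-user albedo map plus an expression-conditioned correction, rather than a single static albedo per user.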