Non-corresponding and topology-free 3D face expression transfer

Published: 01 Jan 2024 · Last Modified: 06 Nov 2024 · Vis. Comput. 2024 · CC BY-SA 4.0
Abstract: Expression transfer is an important task in computer graphics and vision. Existing 3D face models built on registered meshes, or on shapes with corresponding vertices, cannot transfer expressions across practical, unregistered data. While recent learning-based works have achieved pose transfer between unorganized 3D point clouds, they cannot transfer 3D facial expressions well, due to their weak geometry-perception ability and the lack of ground-truth expression faces for training. To address these problems, we propose an effective framework that, for the first time, transfers expressions between non-corresponding and topology-free 3D faces. The framework includes a novel autoencoder that directly processes unordered point clouds to extract identity and expression features and fuses them to generate the desired target faces. Multiple geometry-perception operators are introduced into the autoencoder's encoders to capture the valuable geometric information of 3D faces without the repetitive modulations required by previous methods. In addition, our decoder exploits the powerful interactive perception capability of cross-attention to fuse the extracted features and deform target faces in feature space. To train the autoencoder in a supervised manner, we present a submodule that generates pseudo-ground-truth expression faces using pre-trained deep models and their latent-space operations. Experiments demonstrate the outstanding 3D face expression transfer performance of the proposed method. Our code and data are available at https://github.com/SEULSH/Non-corresponding-and-Topology-free-3D-Face-Expression-Transfer.
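The abstract does not specify the decoder's cross-attention fusion in detail; the following PyTorch sketch is an illustrative reading of it, not the authors' implementation. All names, dimensions, and the residual/offset design are assumptions: per-point identity features of the target face serve as queries, the source face's expression features serve as keys and values, and the fused features predict per-point displacements that deform the target point cloud.

```python
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """Hypothetical sketch: fuse identity and expression features via
    cross-attention, then deform the target face with per-point offsets."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.offset = nn.Linear(dim, 3)  # per-point 3D displacement head

    def forward(self, id_feat, expr_feat, points):
        # id_feat:   (B, N, dim) per-point identity features of the target face
        # expr_feat: (B, M, dim) expression features extracted from the source face
        # points:    (B, N, 3)   unordered target face point cloud
        fused, _ = self.attn(query=id_feat, key=expr_feat, value=expr_feat)
        fused = self.norm(fused + id_feat)   # residual keeps identity content
        return points + self.offset(fused)   # deform the target point cloud
```

Because attention is permutation-equivariant over the query points and pools over an arbitrary number of key/value tokens, this kind of fusion requires neither vertex correspondence nor a fixed mesh topology, which matches the setting the paper targets.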