VisualSpeaker: Visually-Guided 3D Avatar Lip Synthesis

Published: 28 Aug 2025, Last Modified: 28 Aug 2025, Venue: CV4A11y, License: CC BY 4.0
Keywords: Talking Head Generation, 3D Controllable Head, 3D Gaussian Splatting, Visual Lip Recognition, Machine Learning
TL;DR: VisualSpeaker improves 3D facial animations by using 3D Gaussian Splatting and a visual speech recognition model to provide perceptual feedback on an avatar's lip movements.
Abstract: Realistic, high-fidelity 3D facial animations are essential for expressive avatars in human-computer interaction and accessibility. Although prior methods show promising quality, their reliance on the mesh domain limits their ability to fully leverage the rapid visual innovations seen in 2D computer vision and graphics. We propose VisualSpeaker, a novel method that bridges this gap using photorealistic differentiable rendering, supervised by visual speech recognition, for improved 3D facial animation. Our contribution is a perceptual lip-reading loss, derived by passing photorealistic 3D Gaussian Splatting avatar renders through a pre-trained Visual Automatic Speech Recognition model during training. Evaluation on the MEAD dataset demonstrates that VisualSpeaker reduces the standard Lip Vertex Error by 56.1% and improves the perceptual quality of the generated animations, while retaining the controllability of mesh-driven animation. This perceptual focus naturally supports accurate mouthings, essential cues that disambiguate similar manual signs in sign language avatars.
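The abstract describes the core mechanism: differentiably rendered 3D Gaussian Splatting avatar frames are fed to a frozen, pre-trained visual speech recognition model to supervise lip motion. The sketch below illustrates one plausible formulation of such a loss as a feature-space distance; the names `render_gaussian_avatar`, `vasr_encoder`, and `crop_mouth` are hypothetical placeholders, not the authors' actual API, and the paper may instead use a transcript-level (e.g. CTC) objective.

```python
# Minimal sketch of a perceptual lip-reading loss, assuming a differentiable
# 3D Gaussian Splatting renderer and a frozen visual speech recognition (VASR)
# encoder. All function names are illustrative placeholders.
import torch
import torch.nn.functional as F

def lip_reading_loss(pred_params, gt_video, render_gaussian_avatar, vasr_encoder, crop_mouth):
    """Compare VASR features of rendered frames against ground-truth frames."""
    # Differentiably render the Gaussian-splat avatar driven by the predicted
    # animation parameters (e.g., expression/jaw codes) -> (T, 3, H, W).
    rendered = render_gaussian_avatar(pred_params)

    # Lip-reading models typically operate on mouth crops.
    pred_mouth = crop_mouth(rendered)
    gt_mouth = crop_mouth(gt_video)

    # Ground-truth features need no gradient; the VASR encoder stays frozen,
    # while gradients flow back through the rendered frames.
    with torch.no_grad():
        gt_feat = vasr_encoder(gt_mouth)
    pred_feat = vasr_encoder(pred_mouth)

    # Feature-space distance acts as the perceptual lip-reading supervision.
    return F.l1_loss(pred_feat, gt_feat)
```

In this reading, the VASR model plays the role of a perceptual critic: errors that are visually salient to a lip reader produce large feature discrepancies, while geometrically small but perceptually irrelevant deviations are penalised less than a plain vertex-space loss would.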
Supplementary Material: zip
Submission Number: 8