Abstract: Enabling digital humans to express rich emotions has significant applications in dialogue systems, gaming, and other interactive scenarios.
While recent advances in talking head synthesis have achieved impressive results in lip synchronization, they tend to overlook the rich and dynamic nature of facial expressions.
To fill this critical gap, we introduce an end-to-end text-to-expression model that explicitly focuses on emotional dynamics.
Our model learns expressive facial variations in a continuous latent space and generates expressions that are diverse, fluid, and emotionally coherent.
To support this task, we construct EmoAva, a large-scale, high-quality dataset containing 15,000 text–3D expression pairs.
Extensive experiments on both existing datasets and EmoAva demonstrate that our method significantly outperforms baselines across multiple evaluation metrics,
marking a notable advance in the field.
Paper Type: Long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Research Area Keywords: cross-modal content generation, multi-modal dialogue systems
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Submission Number: 1864