Seeing is Believing: Emotion-Aware Audio-Visual Language Modeling for Expressive Speech Generation

Published: 01 Jan 2025 (last modified: 07 Nov 2025) · CoRR 2025 · CC BY-SA 4.0
Abstract: We present an Audio-Visual Language Model (AVLM) for expressive speech generation that integrates full-face visual cues into a pre-trained expressive speech model. We explore multiple visual encoders and multimodal fusion strategies during pre-training to identify the most effective integration approach. Subsequent fine-tuning on emotion recognition and expressive dialogue tasks yields substantial gains over speech-only baselines (e.g., +5 F1 on emotion recognition). These results highlight the value of expressive visual information in guiding speech generation, and AVLM offers a foundation for end-to-end multimodal conversational systems.
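
To make the fusion idea concrete, below is a minimal sketch of one plausible strategy the abstract alludes to: cross-attention from the speech model's hidden states to visual-encoder outputs. All names, dimensions, and design choices here are illustrative assumptions, not the paper's actual architecture.

# Hypothetical sketch of visual-to-speech cross-attention fusion.
# Module names, dimensions, and the residual design are assumptions;
# the paper compares several encoders and fusion strategies.
import torch
import torch.nn as nn

class VisualFusionBlock(nn.Module):
    """Fuses per-frame face features into speech-token hidden states
    via cross-attention (one of several possible fusion strategies)."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, speech_hidden: torch.Tensor, visual_feats: torch.Tensor) -> torch.Tensor:
        # speech_hidden: (batch, T_speech, d_model) from the pre-trained speech model
        # visual_feats:  (batch, T_video, d_model) from a visual encoder
        attended, _ = self.cross_attn(
            query=speech_hidden, key=visual_feats, value=visual_feats
        )
        # Residual connection preserves the pre-trained speech pathway,
        # so the visual branch acts as an additive conditioning signal.
        return self.norm(speech_hidden + attended)

# Toy usage: 2 utterances, 100 speech tokens, 25 video frames.
fusion = VisualFusionBlock()
speech = torch.randn(2, 100, 512)
video = torch.randn(2, 25, 512)
fused = fusion(speech, video)
print(fused.shape)  # torch.Size([2, 100, 512])

A residual design like this is one common way to inject a new modality into a frozen or pre-trained backbone without disrupting its learned representations; the paper's pre-training ablations are what determine which fusion variant actually works best.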