Gesture2Speech: How Far Can Hand Movements Shape Expressive Speech?

AAAI 2026 Workshop BEEU Submission 11 · Authors

Published: 18 Nov 2025 · Last Modified: 18 Nov 2025 · BEEU 2026 · CC BY 4.0
Keywords: Gesture-conditioned Text-to-Speech, Expressive Speech Synthesis, Prosody Modeling, Gesture–Speech Alignment, Embodied Communication, Bodily Expressed Emotion, Multimodal Fusion, Mixture of Experts
TL;DR: Multimodal text-to-speech system that uses hand gesture cues to generate expressive, temporally aligned speech
Abstract: Human communication seamlessly integrates speech and bodily motion, where hand gestures naturally complement vocal prosody to express intent, emotion, and emphasis. While recent text-to-speech (TTS) systems have begun incorporating multimodal cues such as facial expressions or lip movements, the role of hand gestures in shaping prosody remains largely underexplored. We propose a novel multimodal TTS framework, Gesture2Speech, that leverages visual gesture cues to modulate prosody in synthesized speech. Motivated by the observation that confident and expressive speakers coordinate gestures with vocal prosody, we introduce a multimodal Mixture-of-Experts (MoE) architecture that dynamically fuses linguistic content and gesture features within a dedicated style extraction module. The fused representation conditions an LLM-based speech decoder, enabling prosodic modulation that is temporally aligned with hand movements. We further design a gesture–speech alignment loss that explicitly models their temporal correspondence, ensuring fine-grained synchrony between gestures and prosodic contours. Evaluations on the PATS dataset show that Gesture2Speech outperforms state-of-the-art baselines in both speech naturalness and gesture–speech synchrony. To the best of our knowledge, this is the first work to use hand gesture cues for prosody control in neural speech synthesis. Demo samples are available at https://tinyurl.com/3wv58sbw
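For intuition, the sketch below illustrates the two core ideas the abstract describes: a Mixture-of-Experts module that fuses linguistic and gesture features into a style embedding, and a simple temporal alignment term between gesture and prosody sequences. It is a minimal PyTorch-style sketch under assumed names and dimensions (GestureTextMoEStyle, gesture_speech_alignment_loss, feature sizes, frame-wise cosine matching), not the paper's actual implementation.

```python
# Minimal sketch of gesture-conditioned style extraction and a gesture-speech
# alignment loss, written against PyTorch. All names, dimensions, and loss
# details are illustrative assumptions; the paper's architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GestureTextMoEStyle(nn.Module):
    """Mixture-of-Experts style extractor fusing text and gesture features."""

    def __init__(self, text_dim=256, gesture_dim=128, style_dim=256, num_experts=4):
        super().__init__()
        in_dim = text_dim + gesture_dim
        # Each expert maps the concatenated features to a style embedding.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(in_dim, style_dim),
                nn.ReLU(),
                nn.Linear(style_dim, style_dim),
            )
            for _ in range(num_experts)
        ])
        # Gating network produces per-frame expert weights.
        self.gate = nn.Linear(in_dim, num_experts)

    def forward(self, text_feats, gesture_feats):
        # text_feats: (B, T, text_dim), gesture_feats: (B, T, gesture_dim),
        # assumed already resampled to a shared frame rate.
        x = torch.cat([text_feats, gesture_feats], dim=-1)
        weights = F.softmax(self.gate(x), dim=-1)                        # (B, T, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-2)   # (B, T, E, D)
        style = (weights.unsqueeze(-1) * expert_out).sum(dim=-2)         # (B, T, D)
        return style  # conditions the downstream speech decoder


def gesture_speech_alignment_loss(gesture_feats, prosody_feats, proj_g, proj_p):
    """Toy alignment loss: encourage frame-wise agreement between projected
    gesture features and prosody features (e.g., F0/energy contours)."""
    g = F.normalize(proj_g(gesture_feats), dim=-1)  # (B, T, D)
    p = F.normalize(proj_p(prosody_feats), dim=-1)  # (B, T, D)
    return (1.0 - (g * p).sum(dim=-1)).mean()
```

The frame-wise cosine term is only the simplest stand-in: since gesture strokes and prosodic events are correlated but not strictly frame-synchronous, the actual alignment loss could plausibly rely on attention- or DTW-style soft matching instead.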
Submission Number: 11