Aligning Spoken Dialogue Models from User Interactions

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · License: CC BY-NC 4.0
TL;DR: Aligning a real-time multi-stream spoken dialogue model with user interaction data and AI feedback
Abstract: We propose a novel preference alignment framework for improving spoken dialogue models using real-time conversations from user interactions. Current preference learning methods primarily focus on text-based language models and are not directly suited to the complexities of real-time speech interactions, which exhibit richer dynamics (e.g., interruptions, interjections) and no explicit segmentation between speaker turns. We create a large-scale dataset of more than 150,000 preference pairs from raw multi-turn speech conversations, annotated with AI feedback, covering preferences over both linguistic content and temporal context variations. We leverage offline alignment methods to finetune a full-duplex autoregressive speech-to-speech model. Extensive experiments demonstrate that feedback on generic conversations is consistently effective in improving spoken dialogue models, producing more factual, safer, and more contextually aligned interactions. We deploy the finetuned model and conduct holistic human evaluations to assess its impact beyond single-turn conversations. Our findings shed light on the importance of a well-calibrated balance among various dynamics, crucial for natural real-time speech dialogue systems.
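The abstract mentions "offline alignment methods" over preference pairs without naming one here; a standard choice in this setting is DPO. The sketch below is a minimal, illustrative DPO-style objective over such pairs, not the paper's confirmed method; tensor names, shapes, and the `beta` value are assumptions.

```python
# Minimal sketch of a DPO-style offline alignment objective over preference
# pairs (chosen vs. rejected continuations). Illustrative only: the paper's
# exact offline alignment method is not specified in this abstract.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """DPO loss over a batch of preference pairs.

    Each *_logps tensor holds the summed token log-probabilities of the
    chosen ("good") or rejected ("bad") continuation under the trainable
    policy or the frozen reference model, shape (batch,).
    """
    # Implicit reward: scaled log-ratio of policy to reference, per side.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the policy to rank the chosen continuation above the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```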
Lay Summary: It's only very recently that we have started to see chatbots that are not pure "walkie-talkies": ones that can hold fully real-time conversations with humans. This brings additional challenges: we want models to behave well, providing factual information and responding appropriately to sensitive topics, but real-world real-time spoken conversations are very different from and much messier than written ones, as people interrupt each other, overlap, and don't always take turns cleanly. We collected a large number of spontaneous conversations with Moshi, a voice chatbot that can listen and respond to users in real time, and used a second AI "judge" to automatically identify undesirable replies and propose better alternatives. We paired each undesirable reply with its proposed alternative to create "bad-versus-good" example pairs that teach the chatbot to improve. Using feedback on generic conversations consistently helps the model behave better, and human evaluations show that the improvements extend beyond individual responses to overall conversation quality. Since this pipeline runs offline on everyday interactions, it offers a practical way to make voice chatbots safer and better at natural conversation, paving the way for more reliable real-time voice systems.
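The lay summary describes an offline pair-construction loop: a judge model flags undesirable replies in logged conversations and proposes better alternatives, yielding good-versus-bad training pairs. The sketch below illustrates that loop under assumed interfaces; `judge.evaluate`, `convo.context_before`, and the field names are hypothetical stand-ins, not the paper's actual API.

```python
# Hypothetical sketch of the judge-based pair construction described in the
# lay summary. All interfaces here (judge.evaluate, convo.context_before,
# verdict fields) are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PreferencePair:
    context: str   # conversation history up to the flagged reply
    rejected: str  # the model's original (undesirable) reply
    chosen: str    # the judge's proposed better alternative

def build_pairs(logged_conversations, judge):
    """Scan logged conversations and emit (context, rejected, chosen) pairs."""
    pairs = []
    for convo in logged_conversations:
        for turn_idx, reply in enumerate(convo.model_replies):
            # Ask the judge whether this reply was acceptable in context.
            verdict = judge.evaluate(convo.context_before(turn_idx), reply)
            if not verdict.acceptable:
                # Pair the flagged reply with the judge's suggested alternative.
                pairs.append(PreferencePair(
                    context=convo.context_before(turn_idx),
                    rejected=reply,
                    chosen=verdict.suggested_reply,
                ))
    return pairs
```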
Primary Area: Applications->Language, Speech and Dialog
Keywords: Speech Alignment, Audio Language Model, Conversational Model
Submission Number: 12742