NTPP: Generative Speech Language Modeling for Dual-Channel Spoken Dialogue via Next-Token-Pair Prediction
Abstract: Inspired by the impressive capabilities of GPT-4o, there is growing interest in enabling speech language models (SLMs) to engage in natural, fluid spoken interactions with humans. Recent advancements have led to the development of several SLMs that demonstrate promising results in this area. However, current approaches have yet to fully exploit dual-channel speech data, which inherently captures the structure and dynamics of human conversation. In this work, we systematically explore the use of dual-channel speech data in the context of modern large language models, and introduce a novel generative modeling paradigm—Next-Token-Pair Prediction (NTPP)—to enable speaker-independent dual-channel spoken dialogue learning using decoder-only architectures for the first time. We evaluate our approach on standard benchmarks, and empirical results show that our proposed method, NTPP, significantly improves the conversational abilities of SLMs in terms of turn-taking prediction, response coherence, and naturalness. Moreover, compared to existing methods, NTPP achieves substantially lower inference latency, highlighting its practical efficiency for real-time applications. Demo and code can be found at https://audio-3059.pages.dev.
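The core idea named in the abstract, predicting a pair of tokens (one per speech channel) at each autoregressive step, can be illustrated with a minimal data-preparation sketch. This is an assumption-laden toy, not the paper's implementation: the pairing scheme, the padding token, and the helper names (`pair_sequence`, `next_pair_contexts`) are hypothetical.

```python
# Illustrative sketch (NOT the paper's code): Next-Token-Pair Prediction
# factorizes a dual-channel dialogue into a sequence of token pairs
# (a_t, b_t), one token per channel per step, which a decoder-only model
# predicts jointly from the pairs seen so far.

def pair_sequence(channel_a, channel_b, pad=0):
    """Zip two channel token streams into one stream of pairs,
    padding the shorter channel (e.g. with a silence token)."""
    n = max(len(channel_a), len(channel_b))
    a = channel_a + [pad] * (n - len(channel_a))
    b = channel_b + [pad] * (n - len(channel_b))
    return list(zip(a, b))

def next_pair_contexts(pairs):
    """Training views for a decoder-only model: at step t the model
    conditions on pairs[:t] and is trained to emit pairs[t]."""
    return [(pairs[:t], pairs[t]) for t in range(len(pairs))]

# Speaker A and speaker B token streams of unequal length.
pairs = pair_sequence([5, 6, 7], [8, 9])
examples = next_pair_contexts(pairs)
```

Because both speakers advance in lockstep, one pair per step, the model is speaker-independent and can emit or receive speech on either channel at any step, which is what enables the turn-taking behavior the abstract evaluates.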
Lay Summary: Intelligent spoken dialogue systems play a crucial role in many real-world applications, particularly in human-machine interaction. However, enabling voice assistants to generate natural, coherent, and fluid responses remains a significant challenge. In this work, we develop a real-time voice interaction system that learns directly from recordings of human spoken dialogues. Our method generates natural, seamless responses, capturing the nuanced dynamics of human conversation in a data-driven and efficient manner.
Link To Code: https://audio-3059.pages.dev
Primary Area: Applications->Language, Speech and Dialog
Keywords: Spoken dialogue language modeling, Autoregressive, Streaming, Decoder-only Transformer
Submission Number: 3059