Balancing Speech Understanding and Generation Using Continual Pre-training for Codec-based Speech LLM

ACL ARR 2025 February Submission 4553 Authors

15 Feb 2025 (modified: 09 May 2025), ACL ARR 2025 February Submission, CC BY 4.0
Abstract: Recent efforts have extended textual LLMs to the speech domain, yet a key challenge remains: balancing speech understanding and generation while avoiding catastrophic forgetting when integrating acoustically rich codec-based representations into models originally trained on text. In this work, we propose a novel approach that leverages continual pre-training (CPT) on a pre-trained textual LLM to create a codec-based speech language model. This strategy mitigates the modality gap between text and speech, preserving the linguistic reasoning of the original model while enabling high-fidelity speech synthesis. We validate our approach with extensive experiments across multiple tasks, including automatic speech recognition, text-to-speech (TTS), speech-to-text translation, and speech-to-speech translation (S2ST), demonstrating that our model achieves superior TTS performance and, notably, yields the first end-to-end S2ST system based on neural codecs.
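
The sketch below illustrates the general continual pre-training recipe the abstract describes: a pre-trained textual LLM has its vocabulary extended with discrete codec tokens and is then further trained with the standard next-token objective on sequences that interleave text and codec tokens. This is a minimal, assumption-laden illustration, not the authors' released code; the backbone model ("gpt2" as a public stand-in), the codebook size, the token naming scheme, and the example data are all hypothetical.

```python
# Minimal sketch of continual pre-training a textual LLM on codec tokens.
# Assumptions: "gpt2" stands in for the textual backbone, codebook size 1024,
# and speech is represented as <codec_i> tokens interleaved with text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "gpt2"          # hypothetical textual backbone
NUM_CODEC_TOKENS = 1024      # assumed codec codebook size

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# 1) Add one new token per codec index so speech can be modeled like text.
codec_tokens = [f"<codec_{i}>" for i in range(NUM_CODEC_TOKENS)]
tokenizer.add_tokens(codec_tokens)
model.resize_token_embeddings(len(tokenizer))

# 2) One continual pre-training step: causal LM loss on a mixed
#    text / codec-token sequence (e.g. a transcript followed by its codec stream).
def cpt_step(model, tokenizer, mixed_sequence, optimizer):
    batch = tokenizer(mixed_sequence, return_tensors="pt")
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
example = "Hello world. <codec_17><codec_930><codec_4>"  # illustrative sample
loss = cpt_step(model, tokenizer, example, optimizer)
```

In practice the training corpus would mix text-only, speech-only, and paired text/codec sequences so the model retains its linguistic ability while learning to generate codec streams; the exact data mixture and schedule are design choices of the paper, not shown here.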
Paper Type: Long
Research Area: Speech Recognition, Text-to-Speech and Spoken Language Understanding
Research Area Keywords: speech LLM, speech-to-speech translation, speech understanding
Contribution Types: NLP engineering experiment
Languages Studied: English, Mandarin
Submission Number: 4553