From Tower to Spire: Adding the Speech Modality to a Text-Only LLM

ACL ARR 2025 May Submission 4000 Authors

19 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: We introduce Spire, a speech-augmented language model (LM) capable of both transcribing English speech and translating it into 10 other languages, as well as translating text input in both language directions. Spire integrates the speech modality into an existing multilingual LM (MLM) via speech discretization and continued pre-training, using only $42.5$K hours of speech. In particular, we adopt the pretraining framework of MLMs and treat discretized speech input as an additional *translation language*. This approach not only equips the MLM with speech capabilities, but also preserves its strong text-only performance. We achieve this using significantly less data than existing speech LMs, demonstrating that integrating discretized speech input as an additional language is feasible during LM adaptation. We will make our code and models available to the community.
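To make the "speech as an additional translation language" idea concrete, here is a minimal illustrative sketch (not the authors' code). It assumes speech has already been quantized into discrete unit IDs (e.g. by clustering self-supervised speech features, a common choice; the paper's exact discretizer is not specified here). Each unit ID becomes a pseudo-text token, and a speech-to-text pair is then serialized like any text-to-text translation example; the tag format below is hypothetical.

```python
def speech_units_to_tokens(unit_ids):
    """Map discrete speech unit IDs to pseudo-text tokens that can be
    added to a text tokenizer's vocabulary."""
    return [f"<su_{u}>" for u in unit_ids]

def make_translation_example(unit_ids, target_text, target_lang):
    """Format a speech->text pair in a translation-prompt style, so that
    'speech' behaves like just another source language during continued
    pre-training (tag scheme is an assumption for illustration)."""
    src = " ".join(speech_units_to_tokens(unit_ids))
    return f"<speech> {src} <{target_lang}> {target_text}"

# Example: four speech units paired with an English transcript.
example = make_translation_example([17, 17, 4, 932], "hello world", "en")
print(example)
```

Because the serialized example is plain text, the same MLM training pipeline can mix it freely with ordinary text-translation data, which is how the text-only capabilities can be preserved.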
Paper Type: Long
Research Area: Speech Recognition, Text-to-Speech and Spoken Language Understanding
Research Area Keywords: spoken language translation, speech translation, automatic speech recognition, speech technologies
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English, Dutch, German, Spanish, French, Italian, Portuguese, Korean, Russian, Chinese
Keywords: spoken language translation, speech translation, automatic speech recognition, speech technologies
Submission Number: 4000