A Unified Neural Codec Language Model for Selective Editable Text to Speech Generation

ACL ARR 2026 January Submission8892 Authors

06 Jan 2026 (modified: 20 Mar 2026) · License: CC BY 4.0
Keywords: Neural codec language model, Selective speech editing, Text-to-Speech generation
Abstract: Neural codec language models achieve impressive zero-shot Text-to-Speech (TTS) by fully imitating the acoustic characteristics of a short speech prompt, including timbre, prosody, and paralinguistic information. However, such holistic imitation limits their ability to isolate and control individual attributes. In this paper, we present SpeechEdit, a unified codec language model that extends zero-shot TTS with a selective control mechanism. By default, SpeechEdit reproduces the complete acoustic profile inferred from the speech prompt, but it selectively overrides only those attributes specified by explicit control instructions. To enable controllable modeling, SpeechEdit is trained on our newly constructed LibriEdit dataset, which provides delta (difference-aware) training pairs derived from LibriHeavy. Experimental results show that our approach maintains naturalness and robustness while offering flexible, localized control over the targeted attributes. Audio samples are available at \url{https://speech-editing.github.io/speech-editing/}.
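The default-imitation-plus-selective-override behavior described in the abstract can be caricatured as a simple merge of a prompt-inferred attribute profile with explicit overrides. The sketch below is purely illustrative; the function name and attribute keys are assumptions, not from the paper, and the actual model operates on codec tokens rather than symbolic attributes.

```python
# Conceptual sketch (not the authors' implementation): by default every
# attribute is taken from the speech prompt; explicit control instructions
# replace only the attributes they name, leaving the rest untouched.

def resolve_attributes(prompt_profile, control_instructions):
    """Merge prompt-inferred attributes with explicit user overrides."""
    resolved = dict(prompt_profile)        # start from full imitation
    resolved.update(control_instructions)  # override only the named attributes
    return resolved

# Hypothetical attribute profile inferred from the speech prompt.
prompt_profile = {"timbre": "speaker_A", "prosody": "neutral", "emotion": "calm"}
controls = {"emotion": "happy"}            # user overrides emotion only

print(resolve_attributes(prompt_profile, controls))
# → {'timbre': 'speaker_A', 'prosody': 'neutral', 'emotion': 'happy'}
```

The point of the illustration is that unspecified attributes (here, timbre and prosody) continue to follow the prompt, mirroring the paper's claim of localized rather than holistic control.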
Paper Type: Long
Research Area: Speech Processing and Spoken Language Understanding
Research Area Keywords: Text-to-Speech, speech technologies
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 8892