Data-efficient Targeted Token-level Preference Optimization for LLM-based Text-to-Speech

ACL ARR 2026 January Submission 8958 Authors

06 Jan 2026 (modified: 20 Mar 2026), ACL ARR 2026 January Submission, CC BY 4.0
Keywords: text-to-speech, preference optimization, speech technologies
Abstract: Aligning text-to-speech (TTS) system outputs with human feedback through preference optimization has been shown to effectively improve the robustness and naturalness of LLM-based TTS models. Current approaches primarily require paired desirable and undesirable samples at the utterance level. However, such pairs are scarce in TTS output data, and the utterance-level formulation prevents the fine-grained, token-level optimization needed for accurate pronunciation alignment. In this study, we propose TKTO, which eliminates the need for paired data, enabling a more data-efficient training paradigm, and directly targets token-level units, automatically providing fine-grained alignment signals without token-level annotations. TKTO improves accuracy on the challenging Japanese TTS task by 39% and reduces CER by 54%, leveraging 6× more training data and assigning 12.8× stronger rewards to targeted tokens.
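The abstract describes an unpaired, token-level preference objective. The details of TKTO are not given here, so the following is only a minimal sketch of a KTO-style loss applied per token: each utterance carries a single desirable/undesirable label (no paired counterpart), and optional per-token weights let targeted tokens (e.g. those implicated in a mispronunciation) receive a stronger reward signal. The function name, weighting scheme, and simplified reference point (omitting KTO's KL baseline) are all assumptions for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def token_level_kto_loss(token_logratios, desirable, beta=0.1, token_weights=None):
    """Sketch of a KTO-style loss on a single unpaired utterance, per token.

    token_logratios: per-token log(pi_theta(y_t|x) / pi_ref(y_t|x))
    desirable:       bool label for the whole utterance (no paired sample needed)
    token_weights:   optional per-token weights; targeted tokens can be
                     up-weighted to concentrate the reward on them
    (hypothetical signature; not the authors' actual implementation)
    """
    if token_weights is None:
        token_weights = [1.0] * len(token_logratios)
    sign = 1.0 if desirable else -1.0
    # Desirable utterances are pushed toward higher policy/reference log-ratios,
    # undesirable ones toward lower; weights scale each token's contribution.
    losses = [w * (1.0 - sigmoid(sign * beta * lr))
              for w, lr in zip(token_weights, token_logratios)]
    return sum(losses) / sum(token_weights)
```

Because each example needs only a binary label rather than a matched good/bad pair, far more of the available TTS outputs become usable for training, consistent with the abstract's data-efficiency claim.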
Paper Type: Short
Research Area: Speech Processing and Spoken Language Understanding
Research Area Keywords: text-to-speech, speech technologies
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: Japanese, Chinese
Submission Number: 8958