InstructTTS: Modelling Expressive TTS in Discrete Latent Space With Natural Language Style Prompt

Dongchao Yang, Songxiang Liu, Rongjie Huang, Chao Weng, Helen Meng

Published: 01 Jan 2024, Last Modified: 07 Jan 2026 · IEEE/ACM Transactions on Audio, Speech, and Language Processing · CC BY-SA 4.0
Abstract: Expressive text-to-speech (TTS) aims to synthesize speech with varying speaking styles to better reflect human speech patterns. In this study, we use natural language as a style prompt to control the speaking style of the synthesized speech, e.g., “Sigh tone in full of sad mood with some helpless feeling”. Because no existing TTS corpus is suitable for benchmarking this novel task, we first construct a speech corpus whose samples are annotated not only with content transcriptions but also with style descriptions in natural language. We then propose an expressive TTS model, named InstructTTS, which is novel in the following aspects: (1) We leverage self-supervised learning and cross-modal metric learning, and propose a novel three-stage training procedure to obtain a robust sentence embedding model that effectively captures semantic information from the style prompts and controls the speaking style of the generated speech. (2) We model acoustic features in a discrete latent space and train a novel discrete diffusion probabilistic model to generate vector-quantized (VQ) acoustic tokens rather than the commonly used mel-spectrogram. (3) We jointly apply mutual information (MI) estimation and minimization during acoustic model training to minimize style-speaker and style-content MI, thereby avoiding content and speaker information leakage from the style prompt. Extensive objective and subjective evaluations have been conducted to verify the effectiveness and expressiveness of InstructTTS. Experimental results show that InstructTTS synthesizes high-fidelity and natural speech whose speaking style is controlled by the style prompt.
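The abstract names two training ingredients that lend themselves to a concrete illustration: cross-modal metric learning between style prompts and speech style embeddings, and MI minimization against style-speaker/style-content leakage. The PyTorch sketch below shows one plausible rendering of each, a symmetric InfoNCE contrastive loss and a CLUB-style variational MI upper bound. All names (`cross_modal_infonce`, `MIUpperBound`), architectures, and hyperparameters here are illustrative assumptions, not the paper's implementation; the paper's exact losses and estimators may differ.

```python
# Minimal sketch of two losses suggested by the abstract (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


def cross_modal_infonce(prompt_emb: torch.Tensor,
                        style_emb: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss pulling each style-prompt sentence embedding
    toward the style embedding of its paired speech sample (one plausible form
    of the cross-modal metric learning mentioned in the abstract)."""
    prompt_emb = F.normalize(prompt_emb, dim=-1)
    style_emb = F.normalize(style_emb, dim=-1)
    logits = prompt_emb @ style_emb.t() / temperature           # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


class MIUpperBound(nn.Module):
    """CLUB-style sampled upper bound on I(style; speaker). Minimizing it during
    acoustic-model training discourages speaker information from leaking through
    the style representation (a style-content version works the same way)."""

    def __init__(self, style_dim: int, speaker_dim: int, hidden: int = 256):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(style_dim, hidden), nn.ReLU(),
                                nn.Linear(hidden, speaker_dim))
        self.logvar = nn.Sequential(nn.Linear(style_dim, hidden), nn.ReLU(),
                                    nn.Linear(hidden, speaker_dim))

    def forward(self, style: torch.Tensor, speaker: torch.Tensor) -> torch.Tensor:
        mu, logvar = self.mu(style), self.logvar(style)
        # log q(speaker | style) for matched pairs ...
        pos = -0.5 * (((speaker - mu) ** 2) / logvar.exp() + logvar).sum(-1)
        # ... and for shuffled (negative) pairs drawn from the same batch.
        perm = torch.randperm(speaker.size(0))
        neg = -0.5 * (((speaker[perm] - mu) ** 2) / logvar.exp() + logvar).sum(-1)
        return (pos - neg).mean()  # sampled MI upper bound; added to the loss


if __name__ == "__main__":
    B, D = 8, 256
    contrastive = cross_modal_infonce(torch.randn(B, D), torch.randn(B, D))
    mi = MIUpperBound(D, D)(torch.randn(B, D), torch.randn(B, D))
    print(contrastive.item(), mi.item())
```

In a CLUB-style setup the variational network (`mu`, `logvar`) is typically trained separately to fit matched pairs, while the main model minimizes the resulting bound; the sketch above only shows the bound itself.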