Bridge-TTS: Text-to-Speech Synthesis with Schrödinger Bridge

17 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Diffusion Models, Schrödinger Bridge, Text-to-Speech Synthesis, High-Quality Generation, Efficient Sampling
Abstract: In text-to-speech (TTS) synthesis, diffusion models have achieved promising generation quality. However, with the pre-defined data-to-noise diffusion process, their prior distribution is restricted to a noisy representation, which provides little information about the generation target. In this work, we present a novel TTS system, Bridge-TTS, making the first attempt to substitute the noisy Gaussian prior in established diffusion-based TTS methods with a clean and deterministic one, which provides strong structural information about the target. Specifically, we leverage the latent representation obtained from the text input as our prior and build a fully tractable Schrödinger bridge (SB) between it and the ground-truth mel-spectrogram, leading to a faster generation process. Moreover, the tractability and flexibility of our proposed SB formulation allow us to empirically study the noise schedule and the model parameterization in training, as well as to develop training-free stochastic and deterministic samplers with theory-grounded analyses of the bridge SDE and ODE, which further enrich our design space for exploring better generation performance. Experimental results on the LJ-Speech dataset illustrate the effectiveness of our method in terms of synthesis quality and sampling efficiency, outperforming the diffusion counterpart Grad-TTS in 50-step synthesis and strong fast TTS models in the few-step scenario.
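To make the sampling idea concrete, here is a minimal sketch of an ancestral sampler for a tractable bridge of the kind the abstract describes, under simplifying assumptions not taken from the paper: drift f = 0 and a constant diffusion coefficient g, so the process between the deterministic prior x1 (the text latent, at t = 1) and the data x0 (the mel-spectrogram, at t = 0) reduces to a Brownian bridge. The `model` callable, which predicts x0 from the current state, is a hypothetical stand-in for the trained network; this is an illustration of the posterior-sampling pattern, not the Bridge-TTS implementation.

```python
import numpy as np

def bridge_sampler(model, x1, n_steps=4, g=1.0, rng=None):
    """Stochastic ancestral sampler for a Brownian-bridge special case
    (f = 0, constant g) of a data-to-prior bridge.

    model(x_t, t) -- hypothetical network predicting the data x0.
    x1            -- deterministic prior (e.g. a text-derived latent).
    """
    rng = np.random.default_rng() if rng is None else rng
    ts = np.linspace(1.0, 0.0, n_steps + 1)  # integrate from prior (t=1) to data (t=0)
    x = x1.copy()                            # start from the clean, deterministic prior
    for t, s in zip(ts[:-1], ts[1:]):
        x0_hat = model(x, t)                 # data prediction at the current time
        # Posterior of a Brownian bridge pinned at x0_hat (time 0) and x (time t):
        mean = (s / t) * x + (1.0 - s / t) * x0_hat
        std = g * np.sqrt(s * (t - s) / t)
        x = mean + std * rng.standard_normal(x.shape)
    return x                                 # std -> 0 as s -> 0, so x -> x0_hat
```

With an oracle predictor that always returns the true x0, the final step has zero posterior variance, so the sampler lands exactly on the data; with a learned predictor, each step refines the estimate while injecting bridge-consistent noise.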
Supplementary Material: zip
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1017