An analysis on the effects of speaker embedding choice in non auto-regressive TTS

Published: 15 Jun 2023, Last Modified: 20 Oct 2024 · SSW12
Keywords: speech synthesis, speaker embeddings, multi-speaker TTS, speaker disentanglement, speaker verification, non-autoregressive TTS
TL;DR: Several choices of speaker embedding are used to condition a FastPitch architecture; their influence on the multi-speaker output and on the core network's learning process is analysed.
Abstract: In this paper we present a first attempt at understanding how a non-autoregressive, factorised multi-speaker speech synthesis architecture exploits the information present in different speaker embedding sets. We analyse whether jointly learning the representations, or initialising them from pretrained models, yields any quality improvement for the target speaker identities. In a separate analysis, we investigate how the different embedding sets affect the network's core speech abstraction (i.e. the zero-conditioned output) in terms of speaker identity and representation learning. We show that, regardless of the chosen embedding set and learning strategy, the network handles the various speaker identities equally well, with barely noticeable variations in speech output quality, and that speaker leakage into the core structure of the synthesis system is inevitable under the standard training procedures adopted thus far.
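The conditioning scheme the abstract describes can be sketched as follows. FastPitch-style systems typically add a projected speaker embedding to the encoder output before decoding; the zero vector then probes the network's speaker-independent "core" abstraction. The tensor shapes, the projection `W`, and the `condition` helper below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: frames, encoder width, speaker-embedding width.
T, d_enc, d_spk = 50, 384, 256

# Hypothetical per-speaker embedding table: either jointly learned with the
# TTS network, or initialised from a pretrained speaker-verification model.
num_speakers = 4
spk_table = rng.normal(size=(num_speakers, d_spk))

# Linear projection mapping the speaker embedding into the encoder space.
W = rng.normal(scale=0.02, size=(d_spk, d_enc))

def condition(enc_out: np.ndarray, spk_emb: np.ndarray) -> np.ndarray:
    """Additive conditioning: broadcast the projected speaker embedding
    over all time frames of the encoder output."""
    return enc_out + spk_emb @ W

enc_out = rng.normal(size=(T, d_enc))

# Conditioning on a target speaker vs. the zero vector: the latter is the
# "zero-conditioned" probe used to inspect the core speech abstraction.
h_spk = condition(enc_out, spk_table[2])
h_core = condition(enc_out, np.zeros(d_spk))
```

With purely additive conditioning, the zero vector leaves the encoder output unchanged (`h_core == enc_out`), which is why any residual speaker identity observed in the zero-conditioned output must have leaked into the core network weights themselves.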
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/an-analysis-on-the-effects-of-speaker/code)