An Unsupervised Method to Select a Speaker Subset from Large Multi-Speaker Speech Synthesis Datasets

INTERSPEECH 2020
Abstract: Large multi-speaker datasets for TTS typically contain diverse speakers, recording conditions, styles, and data quality. Although one might presume that more data is always better, in this paper we show that a model trained on a carefully chosen subset of speakers from LibriTTS produces synthetic speech of significantly better quality than a model trained on a larger set. We propose an unsupervised methodology for finding this subset by clustering per-speaker acoustic representations.
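
The abstract does not spell out the clustering pipeline, but the core idea it names (grouping speakers by a fixed-size acoustic representation and keeping a subset) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's actual method: the embeddings, the use of k-means, and the selection rule (keep the most acoustically homogeneous cluster) are all hypothetical placeholders.

# A minimal sketch, assuming one acoustic embedding per speaker
# (e.g. utterance-level d-vectors/x-vectors averaged per speaker).
# The clustering algorithm and selection rule are assumptions; the
# abstract only states that per-speaker representations are clustered.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical input: a (num_speakers, dim) matrix of per-speaker
# acoustic representations. In practice these would come from a
# speaker-embedding model run over the LibriTTS data.
num_speakers, dim = 2000, 256
speaker_embeddings = rng.standard_normal((num_speakers, dim))

# Cluster speakers in the embedding space.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
labels = kmeans.fit_predict(speaker_embeddings)

# One possible (assumed) selection rule: keep the cluster whose
# members lie closest to their centroid on average, i.e. the most
# acoustically homogeneous group of speakers.
dists = np.linalg.norm(
    speaker_embeddings - kmeans.cluster_centers_[labels], axis=1
)
mean_dist_per_cluster = np.array(
    [dists[labels == c].mean() for c in range(kmeans.n_clusters)]
)
best_cluster = int(np.argmin(mean_dist_per_cluster))
selected_speakers = np.flatnonzero(labels == best_cluster)
print(f"Selected {selected_speakers.size} speakers from cluster {best_cluster}")

A TTS model would then be trained only on the utterances of `selected_speakers`, matching the paper's claim that such a subset can outperform training on the full speaker set.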