Investigation of using disentangled and interpretable representations with language conditioning for cross-lingual voice conversion

Anonymous

22 Oct 2018 (modified: 05 May 2023) · NIPS 2018 Workshop IRASL Blind Submission · Readers: Everyone
Abstract: We study the problem of cross-lingual voice conversion with non-parallel speech corpora in a one-shot learning setting. Most prior work requires either parallel speech corpora or a sufficient amount of training data from the target speaker. In contrast, we convert arbitrary sentences of an arbitrary source speaker to the target speaker's voice given only one training utterance from the target speaker. To achieve this, we formulate the problem as learning disentangled speaker-specific and context-specific representations, following the idea of [1], which uses a Factorized Hierarchical Variational Autoencoder (FHVAE). After training the FHVAE on multi-speaker training data, given utterances from arbitrary source and target speakers, we estimate these latent representations and then reconstruct the desired utterance with the voice converted to that of the target speaker. We use a multi-language speech corpus to learn a universal model that works for all of the languages. We investigate the use of a one-hot language embedding to condition the model on the language of the queried utterance and show the effectiveness of this approach. We conduct voice conversion experiments with varying numbers of target training utterances and achieve reasonable performance with even a single training utterance. We also investigate the effect of using, or not using, the language conditioning. Furthermore, we visualize the embeddings of the different languages and sexes. Finally, in subjective tests of both single-language and cross-lingual voice conversion, our approach achieved moderately better or comparable results relative to the baseline in terms of speech quality and similarity.
TL;DR: We use a Variational Autoencoder to separate style and content, and achieve voice conversion by modifying the style embedding and decoding. We investigate the use of a multi-language speech corpus and its effects.
Keywords: voice conversion, one-shot learning, cross-lingual, variational autoencoder
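The sketch below is a minimal, hypothetical illustration of the conversion procedure described in the abstract, written in PyTorch. All names (SimpleFHVAE, convert, enc_z1, enc_z2, tgt_lang_id, the layer sizes, and the segment shapes) are assumptions for illustration, not the authors' implementation: the FHVAE of [1] uses sequence encoders and a hierarchical prior, whereas here the encoders and decoder are collapsed into plain MLPs over spectrogram segments. It only shows the core idea of replacing the speaker-specific latent with one estimated from a single target utterance and conditioning the decoder on a one-hot language code.

```python
# Hypothetical sketch of FHVAE-style one-shot voice conversion with one-hot
# language conditioning (PyTorch). Not the authors' code; simplified MLPs
# stand in for the sequence-level FHVAE encoders/decoder of [1].
import torch
import torch.nn as nn


class SimpleFHVAE(nn.Module):
    def __init__(self, n_mels=80, seg_len=20, z1_dim=32, z2_dim=32, n_langs=3):
        super().__init__()
        x_dim = n_mels * seg_len
        # z1: context/content (segment-specific) latent; z2: speaker (sequence-specific) latent
        self.enc_z1 = nn.Sequential(nn.Linear(x_dim, 256), nn.Tanh(), nn.Linear(256, 2 * z1_dim))
        self.enc_z2 = nn.Sequential(nn.Linear(x_dim, 256), nn.Tanh(), nn.Linear(256, 2 * z2_dim))
        # Decoder is conditioned on both latents plus a one-hot language code.
        self.dec = nn.Sequential(nn.Linear(z1_dim + z2_dim + n_langs, 256), nn.Tanh(),
                                 nn.Linear(256, x_dim))
        self.n_langs = n_langs

    @staticmethod
    def reparameterize(stats):
        # stats holds concatenated (mu, logvar); sample z = mu + sigma * eps
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def convert(self, src_segs, tgt_segs, tgt_lang_id):
        """Keep the content latent of the source segments, swap in the target
        speaker's latent (estimated from as little as one utterance), and decode."""
        z1 = self.reparameterize(self.enc_z1(src_segs))                             # content from source
        z2_tgt = self.reparameterize(self.enc_z2(tgt_segs)).mean(0, keepdim=True)   # speaker from target
        lang = torch.nn.functional.one_hot(torch.tensor([tgt_lang_id]), self.n_langs).float()
        cond = torch.cat([z1,
                          z2_tgt.expand(z1.size(0), -1),
                          lang.expand(z1.size(0), -1)], dim=-1)
        return self.dec(cond)                                                       # converted spectrogram segments


# Usage: inputs are flattened (n_mels * seg_len) mel-spectrogram segments.
model = SimpleFHVAE()
src = torch.randn(12, 80 * 20)   # 12 segments from the source utterance
tgt = torch.randn(5, 80 * 20)    # segments from the single target-speaker utterance
converted = model.convert(src, tgt, tgt_lang_id=1)
print(converted.shape)           # torch.Size([12, 1600])
```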