Leveraging Parameter-Efficient Transfer Learning for Multi-Lingual Text-to-Speech Adaptation

ACL ARR 2024 June Submission 3313 Authors

16 Jun 2024 (modified: 09 Aug 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Different languages have distinct phonetic systems and vary in their prosodic features, making it challenging to develop a Text-to-Speech (TTS) model that can effectively synthesise speech in multilingual settings. Furthermore, the TTS architecture needs to be expressive enough to capture nuances across multiple languages and efficient enough to be practical for deployment. The standard approach is to build a transformer-based model such as SpeechT5 and train it on a large multilingual dataset. As these models grow in size, conventional fine-tuning for adapting them becomes impractical due to the heavy computational cost. In this paper, we propose to integrate parameter-efficient transfer learning (PETL) methods, such as adapters and hypernetworks, with the TTS architecture for multilingual speech synthesis. Notably, in our experiments PETL methods are able to achieve comparable or even better performance than full fine-tuning with only $\sim$2.5\% tunable parameters\footnote{The code and samples are available at:~\url{https://anonymous.4open.science/r/multilingualTTS-BA4C}}.
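For readers unfamiliar with adapter-based PETL, the sketch below illustrates the general idea behind the abstract's "$\sim$2.5\% tunable parameters" claim: a small bottleneck adapter is inserted into a frozen backbone and only the adapter weights are trained. This is a minimal, illustrative PyTorch sketch, not the authors' code; module and helper names (`BottleneckAdapter`, `freeze_backbone_except_adapters`) are assumptions, and the actual integration with SpeechT5 is described in the linked repository.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 32):
        super().__init__()
        self.adapter_down = nn.Linear(hidden_dim, bottleneck_dim)
        self.adapter_act = nn.ReLU()
        self.adapter_up = nn.Linear(bottleneck_dim, hidden_dim)
        # Zero-init the up-projection so the adapted layer starts as the identity
        # and does not disturb the frozen backbone at the beginning of training.
        nn.init.zeros_(self.adapter_up.weight)
        nn.init.zeros_(self.adapter_up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.adapter_up(
            self.adapter_act(self.adapter_down(hidden_states))
        )


def freeze_backbone_except_adapters(model: nn.Module) -> None:
    """Freeze every parameter whose name does not mark it as an adapter weight."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name


def trainable_fraction(model: nn.Module) -> float:
    """Fraction of parameters that remain trainable (e.g. ~0.025 for ~2.5%)."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable / total
```

In a typical setup, one such adapter would be added after the feed-forward block of each transformer layer, the backbone would be frozen with `freeze_backbone_except_adapters`, and `trainable_fraction` would report the share of tunable parameters.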
Paper Type: Short
Research Area: Speech Recognition, Text-to-Speech and Spoken Language Understanding
Research Area Keywords: Multilingual Text-to-Speech, PETL, Adapters
Contribution Types: Reproduction study, Approaches low compute settings-efficiency, Publicly available software and/or pre-trained models
Languages Studied: German, French, Hungarian, Dutch, Finnish, Spanish
Submission Number: 3313