Stutter-TTS: Controlled Synthesis and Improved Recognition of Stuttered Speech

03 Oct 2022 (modified: 05 May 2023) · NeurIPS 2022 SyntheticData4ML · Readers: Everyone
Keywords: stuttered speech recognition, text-to-speech synthesis
TL;DR: We present a novel text-to-speech system, Stutter-TTS, that can synthesize stuttered speech in a highly controlled manner, and we demonstrate the use of synthetic stuttered speech to improve ASR performance on real stuttered speech.
Abstract: Stuttering is a speech disorder where the natural flow of speech is interrupted by blocks, repetitions or prolongations of syllables, words and phrases. The majority of existing automatic speech recognition (ASR) interfaces perform poorly on utterances with stutter, mainly due to lack of matched training data. Synthesis of speech with stutter thus presents an opportunity to improve ASR for this type of speech. We describe Stutter-TTS, an end-to-end neural text-to-speech model capable of synthesizing diverse types of stuttering utterances. We develop a simple, yet effective prosody-control strategy whereby additional tokens are introduced into source text during training to represent specific stuttering characteristics. By choosing the position of the stutter tokens, Stutter-TTS allows word-level control of where stuttering occurs in the synthesized utterance. We are able to synthesize stutter events with high accuracy (F1-scores between 0.63 and 0.84, depending on stutter type). By fine-tuning an ASR model on synthetic stuttered speech we are able to reduce word error by 5.7% relative on stuttered utterances, with only minor (< 0.2% relative) degradation for fluent utterances.
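The abstract describes a prosody-control strategy in which stutter tokens are inserted into the source text so that the position of a token determines where a stutter event is synthesized. The following is a minimal sketch of that idea, not the authors' code: the token names ("[rep]", "[block]", "[prolong]") and the insert-before-word convention are assumptions for illustration.

```python
# Minimal sketch of word-level stutter-token insertion into source text,
# as described in the abstract. Token names and placement are assumptions.
from typing import Dict

STUTTER_TOKENS = {"repetition": "[rep]", "block": "[block]", "prolongation": "[prolong]"}

def insert_stutter_tokens(text: str, events: Dict[int, str]) -> str:
    """Insert a stutter token before each word index listed in `events`.

    `events` maps a word index to a stutter type, e.g. {1: "repetition"}.
    Choosing the token position gives word-level control of where the
    synthesized stutter occurs.
    """
    words = text.split()
    out = []
    for i, word in enumerate(words):
        if i in events:
            out.append(STUTTER_TOKENS[events[i]])
        out.append(word)
    return " ".join(out)

if __name__ == "__main__":
    # Produces "please [rep] call me back"; a TTS model would be trained on
    # such token-augmented transcripts paired with stuttered audio.
    print(insert_stutter_tokens("please call me back", {1: "repetition"}))
```

At inference time, the same token insertion would let the user request a specific stutter type at a specific word in the synthesized utterance.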