Audio Data Augmentation for Acoustic-to-Articulatory Speech Inversion

Published: 01 Jan 2023 · Last Modified: 16 Apr 2025 · EUSIPCO 2023 · CC BY-SA 4.0
Abstract: Data augmentation has proven to be a promising way to improve the performance of deep learning models by adding variability to the training data. In previous work on developing a noise-robust acoustic-to-articulatory speech inversion (SI) system, we showed the importance of noise augmentation for improving speech inversion performance in ‘noisy’ speech conditions. In this work, we extend this idea of data augmentation to improve the SI system on both clean and noisy speech by experimenting with three data augmentation methods. We also propose a bidirectional gated recurrent neural network as the speech inversion system, replacing the previously used feed-forward neural network. The inversion system uses mel-frequency cepstral coefficients (MFCCs) as the input acoustic features and six vocal tract variables (TVs) as the output articulatory targets. The performance of the system was measured by computing the correlation between estimated and actual TVs on the Wisconsin X-ray Microbeam database. The proposed speech inversion system shows a 5% relative improvement in correlation over the baseline noise-robust system on clean speech data. When the pretrained model is adapted to each unseen speaker in the test set, the average correlation improves by a further 6%.
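The abstract mentions noise augmentation as one of the data augmentation strategies for noise robustness. The sketch below is a minimal, hedged illustration of additive-noise augmentation at a target SNR; the function name, the SNR range, and the use of NumPy are assumptions for illustration, not the paper's actual augmentation pipeline.

```python
import numpy as np

def add_noise_at_snr(clean, noise, snr_db):
    """Mix a noise segment into clean speech at a target SNR (in dB).

    `clean` and `noise` are 1-D float arrays at the same sample rate;
    the noise is tiled or truncated to match the clean signal length.
    """
    if len(noise) < len(clean):
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[: len(clean)]

    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that clean_power / (scaled noise power) equals the target SNR.
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise

# Hypothetical usage: augment one utterance at a randomly drawn SNR between 0 and 20 dB.
# rng = np.random.default_rng(0)
# noisy = add_noise_at_snr(clean_wav, babble_wav, rng.uniform(0, 20))
```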
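The abstract also describes the model architecture (a bidirectional GRU mapping MFCC frames to six TV trajectories) and the evaluation metric (correlation between estimated and measured TVs). The PyTorch sketch below is one plausible realization under those constraints; the layer sizes, depth, MFCC dimensionality, and the per-TV Pearson correlation helper are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class BiGRUSpeechInversion(nn.Module):
    """Bidirectional GRU that maps MFCC frames to six TV trajectories.

    Hidden size and number of layers are illustrative guesses,
    not the configuration reported in the paper.
    """
    def __init__(self, n_mfcc=13, hidden=256, n_layers=2, n_tvs=6):
        super().__init__()
        self.gru = nn.GRU(n_mfcc, hidden, num_layers=n_layers,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_tvs)

    def forward(self, mfcc):           # mfcc: (batch, frames, n_mfcc)
        out, _ = self.gru(mfcc)        # (batch, frames, 2 * hidden)
        return self.head(out)          # (batch, frames, n_tvs)

def per_tv_correlation(pred, target):
    """Pearson correlation between estimated and measured TVs, one value per TV channel."""
    pred = pred - pred.mean(dim=1, keepdim=True)
    target = target - target.mean(dim=1, keepdim=True)
    num = (pred * target).sum(dim=1)
    den = pred.norm(dim=1) * target.norm(dim=1) + 1e-8
    return (num / den).mean(dim=0)     # average over the batch
```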