Improving Speech Inversion Through Self-Supervised Embeddings and Enhanced Tract Variables

Published: 01 Jan 2024 · Last Modified: 16 Apr 2025 · EUSIPCO 2024 · CC BY-SA 4.0
Abstract: The performance of deep learning models depends significantly on how efficiently they encode input features and decode them into meaningful outputs; better input and output representations can therefore improve both accuracy and generalization. In the context of acoustic-to-articulatory speech inversion (SI) systems, we study the impact of speech representations acquired via self-supervised learning (SSL) models such as HuBERT, compared with conventional acoustic features. Additionally, we investigate the incorporation of novel tract variables (TVs) derived from an improved geometric transformation model. Combining these two approaches raises the Pearson Product-Moment Correlation (PPMC) score, which measures the accuracy of the SI system's TV estimation, from 0.7452 to 0.8141, an improvement of 6.9 percentage points. Our findings underscore the strong influence of rich feature representations from SSL models, and of improved geometric transformations for the target TVs, on the performance of SI systems.
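To make the SSL input representation concrete, the following is a minimal sketch of extracting HuBERT speech embeddings with torchaudio's pretrained pipeline. The input file name, the choice of intermediate layer, and the use of the base-size model are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torchaudio

# Pretrained HuBERT (base) bundle shipped with torchaudio.
bundle = torchaudio.pipelines.HUBERT_BASE
model = bundle.get_model().eval()

# "utterance.wav" is a hypothetical input file.
waveform, sr = torchaudio.load("utterance.wav")
if sr != bundle.sample_rate:
    # HuBERT expects 16 kHz audio; resample if needed.
    waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)

with torch.inference_mode():
    # extract_features returns one tensor per transformer layer,
    # each of shape (batch, frames, feature_dim).
    features, _ = model.extract_features(waveform)

# Pick an intermediate layer as the SI input representation;
# the layer index here is an assumption for illustration.
ssl_embeddings = features[8]
print(ssl_embeddings.shape)
```

These frame-level embeddings would then replace conventional acoustic features (e.g., MFCCs) as the input to the speech inversion network.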
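The PPMC metric quoted above is the Pearson correlation between estimated and ground-truth TV trajectories, averaged across TVs. Below is a minimal sketch of that evaluation; the array shapes, the number of TVs, and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def mean_ppmc(tv_true: np.ndarray, tv_pred: np.ndarray) -> float:
    """Average Pearson correlation over TV channels.

    tv_true, tv_pred: arrays of shape (frames, num_tvs).
    """
    scores = [pearsonr(tv_true[:, k], tv_pred[:, k])[0]
              for k in range(tv_true.shape[1])]
    return float(np.mean(scores))

# Random data standing in for real TV trajectories (nine TVs assumed).
rng = np.random.default_rng(0)
truth = rng.standard_normal((500, 9))
estimate = truth + 0.5 * rng.standard_normal(truth.shape)
print(f"PPMC: {mean_ppmc(truth, estimate):.4f}")
```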