Bridging the Gap: Integrating Pre-Trained Speech Enhancement and Recognition Models for Robust Speech Recognition

Published: 01 Jan 2024 · Last Modified: 15 May 2025 · EUSIPCO 2024 · CC BY-SA 4.0
Abstract: Noise robustness is critical when applying automatic speech recognition (ASR) in real-world scenarios. One solution is to use speech enhancement (SE) models as a front end for ASR. However, neural-network-based (NN-based) SE often introduces artifacts into the enhanced signals and harms ASR performance, particularly when the SE and ASR models are trained independently. This study therefore introduces a simple yet effective SE post-processing technique that bridges the gap between various pretrained SE and ASR models. A bridge module, a lightweight NN, is proposed to estimate signal-level information from the speech signal. Using this information, the observation-adding technique is then applied to effectively mitigate the shortcomings of SE. Experimental results demonstrate that our method successfully integrates diverse pretrained SE and ASR models, considerably boosting ASR robustness. Crucially, no prior knowledge of the ASR model or speech content is required during training or inference. Moreover, the approach remains effective across different datasets without fine-tuning the bridge module, ensuring efficiency and improved generalization.
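The core of the observation-adding technique described above is an interpolation between the SE output and the original noisy observation, so that residual observation energy masks SE artifacts. The sketch below illustrates this idea under assumptions: in the paper the mixing weight comes from the lightweight bridge module, whereas here it is a fixed scalar `alpha` purely for illustration.

```python
import numpy as np

def observation_add(enhanced: np.ndarray, noisy: np.ndarray, alpha: float) -> np.ndarray:
    """Blend the enhanced waveform with the original noisy observation.

    In the paper, the mixing weight is predicted by a lightweight "bridge
    module" from signal-level information; the fixed scalar `alpha` used
    here is an assumption for illustration, not the authors' implementation.
    """
    assert enhanced.shape == noisy.shape, "waveforms must be time-aligned"
    return alpha * enhanced + (1.0 - alpha) * noisy

# Toy example: keep 70% of the SE output and mix back 30% of the
# observation to soften enhancement artifacts.
noisy = np.array([0.5, -0.2, 0.1])
enhanced = np.array([0.4, -0.1, 0.0])
mixed = observation_add(enhanced, noisy, alpha=0.7)
```

With `alpha = 1.0` the pipeline trusts the SE output entirely; with `alpha = 0.0` it falls back to the raw observation, so the bridge module's job is to pick a point between these extremes per input.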