Effect and Analysis of Large-scale Language Model Rescoring on Competitive ASR Systems

Published: 01 Jan 2022, Last Modified: 23 Oct 2023. INTERSPEECH 2022.
Abstract: Large-scale language models (LLMs) such as GPT-2, BERT and RoBERTa have been successfully applied to ASR N-best rescoring. However, whether and how they can benefit competitive, near state-of-the-art ASR systems remains unexplored. In this study, we incorporate LLM rescoring into one of the most competitive ASR baselines: the Conformer-Transducer model. We demonstrate that consistent improvement is achieved by the LLM's bidirectionality, pretraining, in-domain finetuning and context augmentation. Furthermore, our lexical analysis sheds light on how each of these components may contribute to ASR performance.
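The N-best rescoring setup the abstract refers to can be sketched as follows: the first-pass ASR model emits a list of hypotheses with scores, and an external LM rescores each one; the final ranking interpolates the two. This is a minimal illustrative sketch, not the paper's implementation; the `toy_lm_score` function, the interpolation weight, and all hypothesis texts and scores here are invented stand-ins for a real pretrained LM and a real Conformer-Transducer N-best list.

```python
# Minimal sketch of ASR N-best rescoring with an external language model.
# All names, scores, and the toy LM below are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Hypothesis:
    text: str
    asr_score: float  # first-pass log-score from the ASR model


def rescore_nbest(hyps, lm_score_fn, lm_weight=0.5):
    """Rerank N-best hypotheses by interpolating ASR and LM log-scores."""
    return max(hyps, key=lambda h: h.asr_score + lm_weight * lm_score_fn(h.text))


def toy_lm_score(text):
    # Stand-in LM: prefers hypotheses containing "speech" (purely illustrative).
    return 0.0 if "speech" in text else -5.0


nbest = [
    Hypothesis("recognize speech", asr_score=-1.2),
    Hypothesis("wreck a nice beach", asr_score=-1.0),
]
best = rescore_nbest(nbest, toy_lm_score)
print(best.text)  # -> "recognize speech"
```

In practice `lm_score_fn` would be a sentence log-likelihood from a model such as GPT-2 (autoregressive) or a pseudo-log-likelihood from a bidirectional model such as BERT or RoBERTa, which is where the bidirectionality and finetuning effects studied in the paper come in.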