Post-ASR Correction in Hindi: Comparing Language Models and Large Language Models in Low-Resource Scenarios

ACL ARR 2025 May Submission 4737 Authors

20 May 2025 (modified: 03 Jul 2025), ACL ARR 2025 May Submission, CC BY 4.0
Abstract: Automatic Speech Recognition (ASR) systems for low-resource languages like Hindi often suffer from transcription errors due to limited training data and linguistic complexity. Post-ASR correction is a key strategy to address these issues, particularly given the prevalence of code-mixing, compounding, and segmentation errors. We evaluate both fine-tuned language models (LMs) and large language models (LLMs) for this task, treating it as a high-overlap text editing problem. Experiments on the Lahaja dataset reveal a U-shaped inverse scaling trend: smaller models like mT5 and ByT5 outperform larger LLMs such as LLaMA (1B-70B) and GPT-4o-mini, even in in-context learning (ICL) settings. While ByT5 excels at character-level corrections, mT5 performs better on semantic errors. We also observe significant degradation in out-of-domain settings and propose mitigation strategies to retain domain fidelity. Our results highlight that inductive bias and task alignment often outweigh model scale for effective post-ASR correction in low-resource contexts.
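To make the task framing concrete, the sketch below (not the authors' released code) illustrates post-ASR correction treated as high-overlap seq2seq text editing with a byte-level model such as ByT5, via the Hugging Face transformers API. The checkpoint name, the example input, and the decoding settings are illustrative assumptions; in practice the base checkpoint would first be fine-tuned on (ASR hypothesis, reference transcript) pairs, e.g. from Lahaja.

    # Minimal sketch, assuming a ByT5 checkpoint fine-tuned for Hindi
    # post-ASR correction; "google/byt5-small" is the untuned base model
    # and stands in for such a checkpoint here.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    model_name = "google/byt5-small"  # placeholder for a fine-tuned correction model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    # Illustrative noisy Hindi ASR hypothesis (spelling/segmentation error).
    asr_hypothesis = "मौसम अच्छा हे"

    # Byte-level tokenization lets the model edit individual characters,
    # which suits the compounding and segmentation errors noted above.
    inputs = tokenizer(asr_hypothesis, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128, num_beams=4)
    corrected = tokenizer.decode(outputs[0], skip_special_tokens=True)
    print(corrected)

Because the input and output overlap heavily, the model mostly copies the hypothesis and edits only the erroneous spans, which is one reason small, task-aligned models can compete with much larger LLMs here.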
Paper Type: Short
Research Area: Speech Recognition, Text-to-Speech and Spoken Language Understanding
Research Area Keywords: automatic speech recognition, low resource, large language model
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings
Languages Studied: Hindi
Submission Number: 4737