KERLQA: Knowledge-Enhanced Reinforcement Learning for Question Answering in Low-resource Languages

ACL ARR 2025 February Submission 6583 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Question answering in low-resource languages poses challenges for large language models (LLMs) due to limited training data and knowledge resources. We propose Knowledge-Enhanced Reinforcement Learning for Question Answering (KERLQA), a novel approach that integrates external knowledge with reinforcement learning to optimize model behavior. KERLQA employs a graph neural network for joint reasoning over the question context and knowledge sources, and introduces an abstention mechanism to address the heightened risk of hallucination in low-resource settings. This mechanism allows the model to refrain from answering when uncertain, which is particularly important for low-resource languages, where knowledge gaps are more prevalent. We evaluate KERLQA on CommonsenseQA and OpenBookQA across English and four low-resource South African languages: isiZulu, isiXhosa, Sepedi, and SeSotho. Results show that KERLQA outperforms baseline and state-of-the-art systems, with notable improvements in low-resource settings. Our error analysis reveals distinct patterns of knowledge gaps, reasoning failures, and abstention errors across languages, with higher abstention rates in low-resource languages confirming the model's ability to recognize and mitigate knowledge gaps.
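The abstract does not specify how abstention is implemented. Below is a minimal sketch of one plausible reading, assuming confidence-threshold abstention over answer logits and an RL-style reward that penalizes wrong answers more heavily than abstentions; the function names (answer_or_abstain, reward) and all threshold and reward values are hypothetical, not taken from the paper.

    import torch
    import torch.nn.functional as F

    def answer_or_abstain(logits: torch.Tensor, threshold: float = 0.5):
        # Softmax over candidate answers; abstain if top confidence is below threshold.
        probs = F.softmax(logits, dim=-1)
        confidence, prediction = probs.max(dim=-1)
        return prediction.item(), bool(confidence < threshold)

    def reward(prediction: int, abstained: bool, gold: int,
               r_correct: float = 1.0, r_wrong: float = -1.0,
               r_abstain: float = -0.1) -> float:
        # Wrong answers cost more than abstaining, so a policy trained on this
        # reward learns to withhold answers when its knowledge is insufficient.
        if abstained:
            return r_abstain
        return r_correct if prediction == gold else r_wrong

    # Example: near-uniform logits over 5 answer choices trigger abstention.
    logits = torch.tensor([0.2, 0.1, 0.3, 0.25, 0.15])
    pred, abstained = answer_or_abstain(logits)
    print(pred, abstained, reward(pred, abstained, gold=2))

Under these assumed reward values, abstaining yields a small penalty while a wrong answer yields a large one, which would produce the higher abstention rates in low-resource languages that the abstract reports.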
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: Efficient/Low-Resource Methods for NLP, Question Answering, Multilingualism and Cross-Lingual NLP
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings
Languages Studied: English, isiZulu, isiXhosa, Sepedi, SeSotho
Submission Number: 6583