Multi-Hall-SA: A Cross-lingual Benchmark for Multi-Type Hallucination Detection in Low-Resource South African Languages
Abstract: Hallucinations generated by Large Language Models (LLMs) pose significant challenges for their application to low-resource languages.
We present Multi-Hall-SA, a cross-lingual benchmark for hallucination detection spanning English and four low-resource South African languages: isiZulu, isiXhosa, Sepedi, and Sesotho. Derived from government texts, this benchmark categorizes hallucinations into four types: temporal shifts, entity errors, numerical inaccuracies, and location mistakes. Our cross-lingual alignment methodology enables direct performance comparison between high-resource and low-resource languages, revealing significant gaps in detection capabilities. Evaluation across four state-of-the-art models shows they detect up to 23.6\% fewer hallucinations in South African languages compared to English. Knowledge augmentation substantially reduces this disparity, decreasing cross-lingual performance gaps by 59.4\% on average. Beyond introducing a new resource for low-resource languages, Multi-Hall-SA provides a systematic framework for evaluating and improving factual reliability across linguistic boundaries, advancing more inclusive and equitable AI development.
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: hallucinations, LLMs, low-resource
Contribution Types: Model analysis & interpretability, Approaches to low-resource settings, Data resources
Languages Studied: English, Sepedi, isiZulu, isiXhosa, Sesotho
Submission Number: 7825