MKA: Leveraging Cross-Lingual Consensus for Model Abstention

Published: 05 Mar 2025, Last Modified: 31 Mar 2025
License: CC BY 4.0
Track: Long Paper Track (up to 9 pages)
Keywords: model abstention, factuality, multilingual models, cross-lingual consensus, reliability, hallucination
TL;DR: A pipeline that helps LLMs decide when to abstain using their multilingual knowledge.
Abstract: The reliability of LLMs remains questionable even as they improve at more tasks. Wider adoption of LLMs is contingent on whether they are usably factual and, when they are not, on whether they can properly calibrate the confidence in their responses. This work focuses on utilizing the multilingual knowledge of an LLM to inform its decision to abstain or answer when prompted. We develop a multilingual pipeline that calibrates the model's confidence and lets it abstain when uncertain. We run several multilingual models through the pipeline to profile them on various metrics across different languages. We find that the pipeline's performance varies by model and language, but that models generally benefit from it. This is evidenced by an accuracy improvement of $71.2$% for Bengali over a baseline without the pipeline. Even a high-resource language like English sees a $15.5$% improvement.
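The abstention decision the abstract describes can be illustrated with a simple majority-vote rule over answers elicited in multiple languages. This is a minimal sketch, not the paper's actual method: the voting rule, the agreement threshold, and the `consensus_abstain` function name are all assumptions made here for illustration.

```python
from collections import Counter

def consensus_abstain(answers, threshold=0.5):
    """Decide whether to answer or abstain from cross-lingual agreement.

    answers: answer strings elicited from the same model, one per language.
    threshold: minimum fraction of answers that must agree (assumed value).
    Returns ("answer", best_answer) or ("abstain", None).
    """
    if not answers:
        return ("abstain", None)
    # Most common answer across languages and its vote count.
    top, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)
    if agreement >= threshold:
        return ("answer", top)
    return ("abstain", None)

# Hypothetical answers to one question asked in five languages.
print(consensus_abstain(["Paris", "Paris", "Paris", "Lyon", "Paris"]))
# -> ('answer', 'Paris')
print(consensus_abstain(["Paris", "Lyon", "Rome", "Berlin", "Madrid"]))
# -> ('abstain', None)
```

In this toy version, high cross-lingual agreement is treated as a proxy for confidence; when the model's answers diverge across languages, it abstains rather than risk a hallucinated response.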
Submission Number: 146