Keywords: Uncertainty Attribution, Self-Aware LLM Evaluation, Reinforcement Learning for LLMs
Abstract: Reliable Large Language Models (LLMs) should abstain when their confidence is insufficient. However, prior studies often treat refusal as a generic "I don't know", failing to distinguish input-level ambiguity (data uncertainty) from capability limitations (model uncertainty). This conflation limits downstream action decisions such as requesting clarification or invoking external tools.
In this work, we introduce UA-Bench, a benchmark of over 3,500 questions drawn from six datasets spanning knowledge-intensive and reasoning-intensive tasks, designed to evaluate explicit uncertainty attribution.
An evaluation of 18 frontier LLMs shows that even state-of-the-art models struggle to reliably discriminate between data uncertainty and model uncertainty, and that high answer accuracy does not necessarily imply strong uncertainty attribution.
To narrow this gap, we propose a lightweight data-synthesis and reinforcement-learning strategy. With this training, a 4B model improves its uncertainty attribution while preserving answer accuracy.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: uncertainty, feature attribution, natural language explanations
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Submission Number: 5817