Rewarding Doubt: A Reinforcement Learning Approach to Calibrated Confidence Expression of Large Language Models

ICLR 2026 Conference Submission 18260 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Confidence Calibration, Uncertainty Estimation, Large Language Models
TL;DR: We propose Rewarding Doubt, an RL-based approach that models confidence estimation as a betting game, optimizing LLMs for calibrated confidence in factual answers.
Abstract: Safe and trustworthy use of Large Language Models (LLMs) requires an accurate expression of confidence in their answers. We propose a novel Reinforcement Learning approach that directly fine-tunes LLMs to express calibrated confidence estimates alongside their answers to factual questions. Our method optimizes a reward based on the logarithmic scoring rule, explicitly penalizing both over- and under-confidence, which encourages the model to align its confidence estimates with its actual predictive accuracy. The optimal policy under our reward design yields perfectly calibrated confidence expressions. Unlike prior approaches that decouple confidence estimation from response generation, our method integrates confidence calibration seamlessly into the generative process of the LLM. Empirically, we demonstrate that models trained with our approach exhibit substantially improved calibration and generalize to unseen tasks without further fine-tuning, suggesting the emergence of general confidence awareness. We provide our training and evaluation code in the supplementary material and will make it publicly available upon acceptance.
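To make the reward described in the abstract concrete, the sketch below implements the standard logarithmic scoring rule over a verbalized confidence: the reward is ln(c) when the answer is correct and ln(1 - c) when it is wrong, so expected reward is maximized when the stated confidence matches the true probability of being correct. The function name, clipping constant, and exact reward shaping are illustrative assumptions, not the paper's implementation.

```python
import math

def log_score_reward(confidence: float, correct: bool, eps: float = 1e-6) -> float:
    """Logarithmic scoring rule reward for a verbalized confidence in [0, 1].

    Returns ln(c) if the answer is correct and ln(1 - c) otherwise. As a
    proper scoring rule, its expectation is maximized when the reported
    confidence equals the model's actual probability of being correct.
    (Illustrative sketch; the paper's reward may include additional shaping.)
    """
    c = min(max(confidence, eps), 1.0 - eps)  # clip to avoid log(0)
    return math.log(c) if correct else math.log(1.0 - c)

# Over-confidence on a wrong answer is penalized heavily,
# while calibrated confidence earns the best expected reward.
print(round(log_score_reward(0.9, correct=True), 3))   # -0.105
print(round(log_score_reward(0.9, correct=False), 3))  # -2.303
```

In an RL fine-tuning loop, a reward of this form would be assigned to the confidence token(s) emitted alongside the answer, penalizing both over- and under-confidence as the abstract describes.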
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 18260