Confidence as Control: A Survey of Confidence Utilization in Large Language Models

15 Apr 2026 (modified: 30 Apr 2026) · Under review for TMLR · CC BY 4.0
Abstract: Most work on confidence in large language models has focused on estimation, uncertainty quantification, and calibration. In deployed systems, however, the key question is how confidence should be used to govern behavior. This survey studies $\textbf{confidence utilization}$: the use of confidence-related signals to control system decisions. We formalize this perspective through a unified framework in which confidence is defined over decision units under a local state and then consumed by a policy to determine actions. Using this lens, we organize the literature across the full LLM lifecycle: training, inference, model selection and cascading, retrieval-augmented generation, risk management, and agentic control. We compare methods by signal source, decision unit, and functional role, and conclude by highlighting open challenges in confidence semantics, composition, source attribution, decision-aware evaluation, and robustness. Overall, the survey positions confidence not only as an estimation target, but as a control primitive for building more reliable and trustworthy LLM systems.
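As an illustration of the abstract's framing (confidence consumed by a policy to determine actions), the following minimal sketch shows a threshold-based control policy. The function name, thresholds, and action labels are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch: a confidence signal drives a control decision
# (act, escalate, or abstain). Thresholds are hypothetical.

def route(confidence: float, answer: str,
          high: float = 0.9, low: float = 0.5) -> str:
    """Policy consuming a per-decision-unit confidence score."""
    if confidence >= high:
        return answer                   # high confidence: act on the output
    if confidence >= low:
        return f"[escalated] {answer}"  # medium: defer to a stronger model
    return "[abstain]"                  # low: refuse, as a risk-management action

print(route(0.95, "Paris"))  # → Paris
print(route(0.70, "Paris"))  # → [escalated] Paris
print(route(0.20, "Paris"))  # → [abstain]
```

In practice such policies appear in cascading (escalate to a larger model), selective prediction (abstain), and agentic control (ask for clarification), as the survey's taxonomy organizes them.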
Submission Type: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Added one more recent work within our scope: SeLaR (https://arxiv.org/abs/2604.08299)
Assigned Action Editor: ~Matt_Kusner1
Submission Number: 8432