Human-Alignment and Calibration of Inference-Time Uncertainty in Large Language Models

ICLR 2026 Conference Submission 22914 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: large language models, alignment, uncertainty quantification, calibration
TL;DR: We investigate whether a collection of LLM inference-time uncertainty measures shows evidence of human similarity and calibration, with positive results for both.
Abstract: There has been much recent interest in evaluating large language models for uncertainty calibration, both to facilitate model control and to modulate user trust. Inference-time uncertainty, which can provide a real-time signal to the model or to external control modules, is particularly important for applying these concepts to improve the large language model user experience. While much existing work considers model calibration, comparatively little has sought to evaluate how closely model uncertainty aligns with human uncertainty. In this work, we evaluate a collection of inference-time uncertainty measures, using both established metrics and novel variations, to determine how closely they align with both human group-level uncertainty and traditional notions of model calibration. We find that numerous measures show evidence of strong alignment to human uncertainty, despite a lack of alignment to human answer preference. For those successful measures, we find moderate to strong evidence of model calibration in terms of both correctness correlation and distributional analysis.
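The abstract does not specify which uncertainty measures or calibration analyses the paper uses. As a minimal illustrative sketch only, and not the authors' method, the following shows one common inference-time uncertainty signal (mean token-level predictive entropy over the model's next-token distributions) and one standard calibration check (binned expected calibration error against answer correctness); all function names and data below are hypothetical.

```python
# Illustrative sketch (assumptions, not the paper's method): one inference-time
# uncertainty measure -- mean token-level predictive entropy -- and one standard
# calibration diagnostic, expected calibration error (ECE). Data are synthetic.
import numpy as np


def mean_token_entropy(token_probs: np.ndarray) -> float:
    """Average Shannon entropy (nats) over a sequence of next-token distributions.

    token_probs: array of shape (seq_len, vocab_size), each row a probability
    distribution produced at one decoding step.
    """
    eps = 1e-12  # guard against log(0)
    entropies = -np.sum(token_probs * np.log(token_probs + eps), axis=-1)
    return float(entropies.mean())


def expected_calibration_error(confidences: np.ndarray,
                               correct: np.ndarray,
                               n_bins: int = 10) -> float:
    """Binned ECE: |accuracy - mean confidence| per bin, weighted by bin mass."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return float(ece)


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Synthetic next-token distributions for a 20-token answer over a toy vocabulary.
    logits = rng.normal(size=(20, 50))
    probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    print("mean token entropy:", mean_token_entropy(probs))

    # Synthetic per-question confidences and correctness labels for a calibration check.
    conf = rng.uniform(size=200)
    correct = (rng.uniform(size=200) < conf).astype(float)  # roughly calibrated by construction
    print("ECE:", expected_calibration_error(conf, correct))
```

In this kind of analysis, the uncertainty signal would be converted to a confidence score (e.g., negated or normalized entropy) before correlating it with correctness or comparing its distribution to human group-level uncertainty; the specifics here are placeholders rather than the paper's evaluation protocol.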
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 22914