Improving Semantic Uncertainty Quantification in Question Answering via Token-Level Temperature Scaling
Keywords: Semantic Uncertainty Quantification, Semantic Calibration, LLMs, QA
TL;DR: This paper introduces new semantic confidence measures and shows that principled token-level temperature optimisation improves calibration, discrimination, and entropy-based semantic uncertainty quantification in LLMs, outperforming both heuristic and more complex calibration methods.
Abstract: Calibration is central to reliable semantic uncertainty, yet prior work has largely focused on discrimination, neglecting calibration at the level of meaning. As calibration and discrimination capture distinct aspects of uncertainty, focusing on discrimination alone yields an incomplete picture of semantic reliability. We address this gap by systematically evaluating both aspects across a broad set of confidence measures. We show that current approaches, particularly fixed-temperature heuristics, produce systematically miscalibrated and poorly discriminative semantic confidence distributions. We demonstrate that optimising a single scalar temperature, which we argue provides a suitable inductive bias for semantic uncertainty quantification, is a surprisingly simple yet effective solution. Our exhaustive evaluation confirms that temperature scaling consistently improves semantic calibration, discrimination, and downstream entropy, outperforming both heuristic baselines and more expressive token-level recalibration methods.
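The single-scalar temperature scaling the abstract describes can be illustrated with a minimal sketch. Assuming held-out token logits and gold labels, the temperature T is chosen to minimise negative log-likelihood; the function names (`nll`, `fit_temperature`), the grid-search optimiser, and the synthetic data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nll(logits, labels, T):
    """Mean negative log-likelihood of labels under temperature-scaled softmax."""
    z = logits / T
    z -= z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels, grid=np.linspace(0.05, 5.0, 200)):
    """Pick the scalar T minimising held-out NLL (simple grid search)."""
    return min(grid, key=lambda T: nll(logits, labels, T))

# Synthetic demo: a model whose logits are overconfident by a factor of 2,
# so the NLL-optimal temperature should recover roughly T = 2.
rng = np.random.default_rng(0)
base = rng.normal(size=(5000, 10))                              # "true" logits
probs = np.exp(base) / np.exp(base).sum(axis=1, keepdims=True)  # true label distribution
labels = np.array([rng.choice(10, p=p) for p in probs])
logits = 2.0 * base                                             # overconfident outputs

T_opt = fit_temperature(logits, labels)
```

Because the scaled softmax at T = 2 exactly matches the label-generating distribution, the fitted temperature lands near 2 and yields a lower held-out NLL than the uncalibrated T = 1 model.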
Submission Number: 13