The Confidence Manifold: Geometric Structure of Correctness Representations in Language Models

Published: 02 Mar 2026, Last Modified: 30 Mar 2026 · Agentic AI in the Wild: From Hallucinations to Reliable Autonomy (Poster) · CC BY 4.0
Keywords: hallucination detection, confidence estimation, linear probes, representation geometry, intrinsic dimensionality, interpretability, activation analysis, manifold learning
TL;DR: Correctness in LLMs is encoded in a 3-8D subspace where simple centroid distance matches trained probes (0.90 AUC), while output-based methods perform near chance.
Abstract: When a language model asserts that "the capital of Australia is Sydney," does it know this is wrong? We characterize the geometry of correctness representations across 9 models from 5 architecture families. The structure is simple: the discriminative signal occupies 3-8 dimensions, performance degrades as additional dimensions are included, and no nonlinear classifier improves over linear separation. Centroid distance in the low-dimensional subspace matches trained probe performance (0.90 AUC), enabling few-shot detection: on GPT-2, 25 labeled examples achieve 89% of full-data accuracy. We validate causally through activation steering: the learned direction produces 10.9-percentage-point changes in error rates, while random directions show no effect. Internal probes achieve 0.80-0.97 AUC; output-based methods (P(True), semantic entropy) achieve only 0.44-0.64 AUC. The correctness signal exists internally but is not expressed in outputs. That centroid distance matches probe performance indicates the class separation is a mean shift, making detection geometric rather than learned.
Submission Number: 65