Unravelling the Mechanisms of Manipulating Numbers in Language Models

12 Sept 2025 (modified: 06 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: interpretability, probing, representations, numbers, language models, factuality
TL;DR: We show that language models use interchangeable, universal, and systematic representations of numbers, allowing accurate tracing of information --- including its accuracy and robustness --- through their computation.
Abstract: Recent work has shown that different large language models (LLMs) converge to similar and accurate input embedding representations for numbers. These findings conflict with the documented propensity of LLMs to produce erroneous outputs when dealing with numeric information. In this work, we aim to explain this conflict by exploring how language models \emph{manipulate} numbers and by quantifying the lower bounds of accuracy of these mechanisms. We find that, despite surfacing errors, different language models learn interchangeable representations of numbers that are systematic, highly accurate, and universal across their hidden states and input context types. This allows us to create universal probes for each LLM and to trace information --- including the causes of output errors --- to specific layers. Our results establish a foundational understanding of how pre-trained LLMs manipulate numbers and outline the practical potential of more accurate probing techniques for targeted refinements of LLM architectures.
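To make the probing idea concrete, here is a minimal sketch of a linear probe decoding a numeric value from a hidden state. The setup is hypothetical (synthetic hidden states, an assumed linear "value direction", and illustrative dimensions), not the paper's actual method or data; it only illustrates the general technique of fitting a regression probe on representations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: pretend hidden states for the numbers 0..99,
# assuming the value is encoded linearly along one direction plus noise.
d = 64                                  # illustrative hidden-state size
numbers = np.arange(100, dtype=float)   # target numeric values
direction = rng.normal(size=d)          # assumed "value" direction
hidden = numbers[:, None] * direction + 0.1 * rng.normal(size=(100, d))

# Fit a linear probe (least squares with a bias term) that maps a
# hidden state back to its numeric value.
X = np.hstack([hidden, np.ones((100, 1))])
w, *_ = np.linalg.lstsq(X, numbers, rcond=None)

# Probe predictions and mean absolute decoding error.
pred = X @ w
mae = float(np.mean(np.abs(pred - numbers)))
print(f"probe MAE: {mae:.4f}")
```

In practice such a probe would be fit on hidden states extracted from a real model at a given layer; comparing probe accuracy across layers is one way to trace where numeric information degrades.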
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 4472