Sequences of Logits Reveal the Low Rank Structure of Language Models

Published: 26 Jan 2026, Last Modified: 02 Mar 2026 · ICLR 2026 Oral · CC BY 4.0
Keywords: Large language models, low-rank structure
TL;DR: We exploit the low-rank structure of the logit matrices of LLMs to draw new empirical and theoretical conclusions.
Abstract: A major problem in the study of large language models is understanding their inherent low-dimensional structure. We introduce an approach to studying the low-dimensional structure of language models at a model-agnostic level: as sequential probabilistic models. We first demonstrate empirically that a wide range of modern language models exhibit low-rank structure: in particular, matrices built from the model's logits for varying sets of prompts and responses have low approximate rank. We then show that this low-rank structure can be leveraged for generation: in particular, we can generate a response to a target prompt using a linear combination of the model's outputs on unrelated, or even nonsensical, prompts. On the theoretical front, we observe that studying the approximate rank of language models in this sense yields a simple universal abstraction whose theoretical predictions parallel our experiments. We then analyze the representational power of this abstraction and give provable learning guarantees.
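To make the abstract's two empirical claims concrete, here is a minimal sketch (not the authors' code) of how one might probe them: stack next-token logit vectors for several prompts into a matrix, inspect its singular-value decay as a proxy for approximate rank, then fit a least-squares linear combination of "basis" prompts' logits to a held-out target prompt's logits. It assumes a Hugging Face causal LM; the model name "gpt2", the prompt strings, and the least-squares fit are all placeholder choices, not details from the paper.

```python
# Sketch of the low-rank probe suggested by the abstract (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model; any causal LM would do
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def next_token_logits(prompt: str) -> torch.Tensor:
    """Logits over the vocabulary for the token following `prompt`."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids)
    return out.logits[0, -1]  # shape: (vocab_size,)

basis_prompts = [
    "The capital of France is",
    "Once upon a time there was",
    "import numpy as np",
    "colorless green ideas sleep",  # a nonsensical prompt, per the abstract
]
target_prompt = "The largest planet in the solar system is"

# Rows are prompts, columns are vocabulary entries.
L = torch.stack([next_token_logits(p) for p in basis_prompts])

# Approximate rank: how quickly do the singular values decay?
print("singular values:", torch.linalg.svdvals(L))

# Express the target prompt's logits as a linear combination of the
# basis rows by solving the least-squares system L^T x = target.
target = next_token_logits(target_prompt)
coeffs = torch.linalg.lstsq(L.T, target.unsqueeze(1)).solution.squeeze()
approx = coeffs @ L
print("coefficients  :", coeffs)
print("target argmax :", tok.decode(target.argmax().item()))
print("approx argmax :", tok.decode(approx.argmax().item()))
```

With only four basis prompts this is purely illustrative; the paper's experiments presumably use far larger prompt/response sets to estimate approximate rank.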
Primary Area: learning theory
Submission Number: 8100