Multiple Token Divergence: Measuring and Steering In-Context Computation Density

ICLR 2026 Conference Submission21842 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Language models, in-context learning, reasoning, interpretability, decoding
TL;DR: If a shallow auxiliary prediction head struggles to approximate the model's full next-token prediction, we can infer that the model is performing complex in-context computation.
Abstract: Measuring the in-context computational effort of language models is a key challenge, as metrics like next-token loss fail to capture reasoning complexity. Prior methods based on latent state compressibility can be invasive and unstable. We propose Multiple Token Divergence (MTD), a simple measure of computational effort defined as the KL divergence between a model's full output distribution and that of a shallow, auxiliary prediction head. MTD can be computed directly from pre-trained models with multiple prediction heads, requiring no additional training. Building on this, we introduce Divergence Steering, a novel decoding method to control the computational character of generated text. We empirically show that MTD is more effective than prior methods at distinguishing complex tasks from simple ones. On mathematical reasoning benchmarks, MTD correlates positively with problem difficulty. Lower MTD is associated with more accurate reasoning. MTD provides a practical, lightweight tool for analyzing and steering the computational dynamics of language models.
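Illustration: the core quantity described in the abstract is a per-position KL divergence between the model's full next-token distribution and the distribution of a shallow auxiliary prediction head. Below is a minimal PyTorch sketch of that computation under stated assumptions; the tensor names (`full_logits`, `aux_logits`) are illustrative placeholders for whatever the full model and its auxiliary head actually expose, not the authors' API, and the exact direction and normalization of the divergence used in the paper may differ.

```python
# Minimal sketch of Multiple Token Divergence (MTD), assuming access to
# per-position logits from (a) the model's full next-token head and
# (b) a shallow auxiliary prediction head, e.g. an extra head retained
# from multi-token-prediction pre-training.

import torch
import torch.nn.functional as F


def multiple_token_divergence(full_logits: torch.Tensor,
                              aux_logits: torch.Tensor) -> torch.Tensor:
    """Per-position KL(p_full || p_aux) over the vocabulary.

    full_logits, aux_logits: [batch, seq_len, vocab_size]
    returns: [batch, seq_len] divergences in nats.
    """
    log_p_full = F.log_softmax(full_logits, dim=-1)
    log_p_aux = F.log_softmax(aux_logits, dim=-1)
    p_full = log_p_full.exp()
    # KL(p_full || p_aux) = sum_v p_full(v) * (log p_full(v) - log p_aux(v))
    return (p_full * (log_p_full - log_p_aux)).sum(dim=-1)


# Example usage with random logits standing in for model outputs.
if __name__ == "__main__":
    full = torch.randn(2, 16, 32000)
    aux = torch.randn(2, 16, 32000)
    mtd = multiple_token_divergence(full, aux)
    print(mtd.shape)  # torch.Size([2, 16])
```

A high value at a position suggests the shallow head cannot reproduce the full model's prediction there, which the abstract interprets as a signal of complex in-context computation; Divergence Steering presumably biases decoding using this per-token signal, but its exact decoding rule is not specified on this page.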
Supplementary Material: pdf
Primary Area: foundation or frontier models, including LLMs
Submission Number: 21842