Fluid Reasoning Representations

Published: 30 Sept 2025 · Last Modified: 17 Nov 2025
Venue: Mech Interp Workshop (NeurIPS 2025) Poster
License: CC BY 4.0
Keywords: Chain of Thought/Reasoning models, Causal interventions, Steering
TL;DR: Reasoning models like QwQ-32B progressively adapt their internal representations during extended reasoning to develop abstract, symbolic encodings that enable better performance on obfuscated planning tasks.
Abstract: Reasoning language models, which generate long chains of thought, dramatically outperform non-reasoning language models on abstract problems. However, the internal model mechanisms that allow this superior performance remain poorly understood. We present a mechanistic analysis of how QwQ-32B -- a model specifically trained to produce extensive reasoning traces -- processes abstract structural information. On Mystery Blocksworld -- a semantically obfuscated planning domain -- we find that QwQ-32B gradually improves its internal representation of actions and concepts during reasoning. The model develops abstract encodings that focus on structure rather than specific action names. Through steering experiments, we establish causal evidence that these adaptations improve problem solving: injecting refined representations from successful traces boosts accuracy, while symbolic representations can replace many obfuscated encodings with minimal performance loss. We find that one of the factors driving reasoning model performance is in-context refinement of token representations, which we dub Fluid Reasoning Representations.
Submission Number: 143
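
As a rough illustration of the steering experiments the abstract describes, the sketch below caches a residual-stream activation from a successful reasoning trace and injects it while the model works on an obfuscated prompt. This is not the authors' released pipeline: the model repo name is the one studied, but the layer index, injection position (last prompt token), injection strength, and helper names are illustrative assumptions.

```python
# Minimal sketch (assumptions noted inline), not the authors' code: activation
# steering via a forward hook on one decoder layer of QwQ-32B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/QwQ-32B"   # model analyzed in the paper
LAYER = 40                    # hypothetical intervention layer (assumption)
ALPHA = 1.0                   # injection strength (assumption)

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto")

def cache_activation(text: str, layer: int) -> torch.Tensor:
    """Residual-stream activation at the last token of `text`, after `layer`."""
    ids = tok(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so index `layer` is the output of layer `layer`.
    return out.hidden_states[layer][0, -1].clone()

def injection_hook(vector: torch.Tensor, alpha: float = ALPHA):
    """Forward hook that blends the cached vector into the last prompt-token activation."""
    def hook(_module, _inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        if hidden.shape[1] > 1:  # prefill pass only; skip incremental decode steps
            v = vector.to(device=hidden.device, dtype=hidden.dtype)
            hidden[:, -1] = (1 - alpha) * hidden[:, -1] + alpha * v
        return ((hidden,) + output[1:]) if isinstance(output, tuple) else hidden
    return hook

# 1) Cache a "refined" representation from a successful reasoning trace (placeholder text).
successful_trace = "...full chain of thought that solved the problem..."
refined = cache_activation(successful_trace, LAYER)

# 2) Inject it while solving an obfuscated planning prompt (placeholder text).
obfuscated_prompt = "...Mystery Blocksworld problem statement..."
handle = model.model.layers[LAYER - 1].register_forward_hook(injection_hook(refined))
inputs = tok(obfuscated_prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    steered = model.generate(**inputs, max_new_tokens=512)
handle.remove()
print(tok.decode(steered[0], skip_special_tokens=True))
```

In this setup, comparing accuracy with and without the hook (or with a symbolic replacement vector) is one simple way to probe whether the refined representations causally contribute to problem solving, in the spirit of the experiments summarized above.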