Keywords: Reasoning, Circuit, Post-Training, Multi-Head Attention
Abstract: The remarkable capabilities of modern large reasoning models are unlocked primarily through post-training techniques such as supervised fine-tuning (SFT) and reinforcement learning (RL). However, the architectural mechanisms behind these improvements remain largely opaque. In this work, we use circuit analysis to demonstrate that post-training for complex reasoning sparks the emergence of novel, functionally specialized attention heads.
These heads collectively support structured reasoning and computation.
Our comparative analysis across the Qwen model family and Qwen-based DeepSeek-distilled models reveals that these emergent heads evolve differently under different training regimes. Distillation and SFT foster the cumulative addition of stable reasoning heads.
In contrast, group relative policy optimization (GRPO) operates in a dynamic search mode: relatively few attention heads are iteratively activated, evaluated, and pruned, with their survival closely tracking fluctuations in the task reward signal. Furthermore, we find that controllable "think on/off" models do not possess dedicated "thinking" heads.
Instead, turning off explicit reasoning triggers a broader—but less efficient—set of compensatory heads.
Through ablation and qualitative analyses, we connect these circuit-level dynamics to a crucial performance trade-off:
strengthened heads enable sophisticated problem-solving strategies on difficult problems but can also introduce "over-thinking" failure modes, such as calculation errors or logical loops on simpler tasks. These findings tie circuit-level dynamics to macro-level performance, revealing an inherent tension in which complex reasoning comes at the cost of elementary computation. More broadly, our work points to future directions for training-policy design, emphasizing the need to balance the development of effective reasoning strategies with reliable, error-free execution.
Primary Area: interpretability and explainable AI
Submission Number: 16692