Keywords: Chain of Thought/Reasoning models, Sparse Autoencoders
Other Keywords: Low-Rank Adapters
TL;DR: We train minimal adapters to finetune reasoning models and apply interpretability techniques to these adapters.
Abstract: Reasoning models leverage inference-time compute to significantly enhance the performance of language models on difficult logical tasks, and have become a dominant paradigm in frontier LLMs. Despite their widespread adoption, the mechanisms underpinning the enhanced performance of these reasoning models are not well understood. In this work, we show that the majority of new capabilities in reasoning models can be elicited by small, single-rank changes to base model parameters, with many of these changes being interpretable. Specifically, we use a rank-1 LoRA to create a minimal-parameter adapter for \texttt{Qwen-2.5-32B-Instruct} that recovers 73-90\% of reasoning-benchmark performance relative to a full-parameter finetune. We find that the activations of this LoRA are as interpretable as MLP neurons and fire for reasoning-specific behaviors. Finally, we train a sparse autoencoder on the entire activation state of this LoRA and identify fine-grained and monosemantic features. Our findings reveal how reasoning performance can arise largely from minimal changes to base model parameters. More broadly, our work shows that parameter-efficient training methods can be used as a targeted lens for uncovering fundamental insights about language model behavior and dynamics.
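To make the setup concrete, the sketch below shows how a rank-1 LoRA adapter of the kind described in the abstract could be attached to \texttt{Qwen-2.5-32B-Instruct} using the Hugging Face PEFT library. This is an illustrative assumption, not the authors' released code: the target modules, scaling factor, and dropout are placeholder choices, and only the rank (r=1) is taken from the abstract.

```python
# Minimal sketch, assuming Hugging Face Transformers + PEFT.
# Hyperparameters other than r=1 are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-32B-Instruct", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-32B-Instruct")

lora_config = LoraConfig(
    r=1,                 # single-rank adapter, as described in the abstract
    lora_alpha=16,       # assumed scaling factor
    lora_dropout=0.0,
    target_modules=[     # assumed module set; the paper may target a different subset
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the rank-1 A/B matrices are trainable
```

Because the adapter is rank 1, each adapted module contributes a single scalar activation per token, which is what makes the adapter activations amenable to the neuron-level interpretability and sparse-autoencoder analysis the abstract describes.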
Submission Number: 188