Keywords: Mechanistic Interpretability, Math Reasoning, Fine-tuning
TL;DR: CircuitTuning selectively updates sparse task-relevant circuits in LLMs, boosting math reasoning accuracy by up to 11.4% with minimal changes and little impact on other abilities.
Abstract: Prior studies of the internal workings of LLMs have uncovered sparse subnetworks, often referred to as circuits, that are responsible for performing specific tasks. It has also been shown that performance gains from fine-tuning often result from the strengthening of circuits already present in the model. Taken together, these findings suggest that intervening directly on such circuits could enable precise, task-targeted updates. Motivated by this, we propose CircuitTuning, a method that identifies pivotal tokens in model reasoning traces along with the model components responsible for the desired task, and updates only those components. Applied to mathematical reasoning, it improves accuracy by up to +11.4% across multiple models while modifying as little as 1.59% of model components, with minimal impact on other abilities as measured by MMLU, TriviaQA, and TruthfulQA. These results demonstrate that targeted capabilities can be reliably enhanced by selectively updating a sparse set of model components.
Primary Area: interpretability and explainable AI
Submission Number: 14981
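To make the selective-update idea in the abstract concrete, here is a minimal sketch of fine-tuning only a sparse, pre-identified set of components. This is not the paper's actual procedure: the circuit-discovery step (pivotal tokens, component attribution) is omitted, the toy model stands in for an LLM, and `circuit_params` is a hypothetical placeholder for whatever that discovery step would return.

```python
# Hypothetical sketch: update only a sparse set of "circuit" parameters.
import torch
import torch.nn as nn

# Toy stand-in for an LLM: a small two-layer MLP.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

# Assume a prior analysis marked these named parameters as the task circuit
# (hypothetical output of a circuit-discovery step, not shown here).
circuit_params = {"0.weight"}

# Freeze everything, then re-enable gradients only for circuit components.
for name, param in model.named_parameters():
    param.requires_grad = name in circuit_params

# Optimize only the unfrozen (circuit) parameters.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

# One illustrative update step; random data stands in for a math-reasoning batch.
x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Under these assumptions, all gradient updates are confined to the chosen subset, so the rest of the network, and hence unrelated capabilities, is left untouched by construction.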