ThinkEdit: Interpretable Weight Editing to Mitigate Overly Short Thinking in Reasoning Models

Published: 17 Oct 2025 · Last Modified: 21 Nov 2025 · MATH-AI 2025 Poster · CC BY 4.0
Keywords: Mathematical Reasoning, Weight Editing, Representation Engineering
TL;DR: We identify and edit “short-reasoning” attention heads, enhancing the mathematical reasoning ability of Large Reasoning Models.
Abstract: Recent studies have shown that Large Language Models (LLMs) augmented with chain-of-thought (CoT) reasoning demonstrate impressive problem-solving abilities. In this work, however, we identify a recurring issue in which these models occasionally generate overly short reasoning, degrading performance even on simple mathematical problems. Specifically, we investigate how reasoning length is embedded in the hidden representations of reasoning models. Our analysis reveals that reasoning length is governed by a linear direction in the representation space, allowing us to induce overly short reasoning by steering the model along this direction. Building on this insight, we introduce $\textbf{\textit{ThinkEdit}}$, an effective weight-editing approach that mitigates overly short reasoning. We first identify a small subset of attention heads (approximately 4\%) that predominantly drive short-reasoning behavior, and then edit the output projection weights of these heads to remove the short-reasoning direction. With changes to only 0.2\% of the model's parameters, $\textbf{\textit{ThinkEdit}}$ effectively reduces overly short reasoning and yields notable accuracy gains on short reasoning outputs (+6.39\%), along with an overall improvement (+3.34\%) across multiple math benchmarks.
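Since the abstract describes the edit only at a high level, the PyTorch sketch below illustrates the general kind of operation involved: projecting a "short-reasoning" direction out of the output-projection weights of selected attention heads. This is a minimal sketch under stated assumptions, not the authors' released implementation; the function name, the column-wise weight layout, and the suggestion of obtaining the direction as a mean hidden-state difference are all hypothetical.

```python
import torch

def remove_direction_from_heads(o_proj_weight: torch.Tensor,
                                head_ids: list[int],
                                direction: torch.Tensor,
                                head_dim: int) -> torch.Tensor:
    """Project a 'short-reasoning' direction out of selected heads' output projections.

    Assumed layout (hypothetical, but common in transformer implementations):
      o_proj_weight: (hidden_dim, num_heads * head_dim); columns
                     [h*head_dim : (h+1)*head_dim] carry head h's contribution
                     to the residual stream.
      direction:     (hidden_dim,) vector along which reasoning length is encoded,
                     e.g. a mean hidden-state difference between short- and
                     long-reasoning generations.
    """
    d = direction / direction.norm()      # unit vector for the direction
    projector = torch.outer(d, d)         # rank-1 projector d d^T onto the direction
    W = o_proj_weight.clone()
    for h in head_ids:
        cols = slice(h * head_dim, (h + 1) * head_dim)
        # W_h <- (I - d d^T) W_h : head h can no longer write along `d`.
        W[:, cols] -= projector @ W[:, cols]
    return W
```

Because such an edit is a rank-1 projection applied only to the roughly 4\% of heads identified as driving short reasoning, it leaves the rest of the model untouched, which is consistent with the abstract's figure of modifying only 0.2\% of the parameters.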
Submission Number: 31