Do NOT Think That Much for 2+3=? On the Overthinking of Long Reasoning Models

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: The paper addresses overthinking in long reasoning models and proposes optimizations that enhance efficiency without compromising performance.
Abstract: The remarkable performance of long reasoning models can be attributed to their ability to emulate human-like long-time thinking during inference. These models employ extended chain-of-thought (CoT) processes, exploring multiple strategies to enhance problem-solving capabilities. However, a critical question remains: how can computational resources be scaled intelligently and efficiently at test time? This paper presents the first comprehensive study of the prevalent issue of overthinking in these models, where long reasoning models generate redundant solutions that contribute minimally to accuracy and diversity, thereby wasting computational resources on simple problems for minimal benefit. We introduce novel efficiency metrics, from both outcome and process perspectives, to evaluate how rationally long reasoning models use computational resources. Using a self-training paradigm, we propose strategies to mitigate overthinking, simplifying reasoning processes without compromising accuracy. Experimental results show that our approach successfully reduces computational overhead while preserving model performance across a range of test sets with varying difficulty levels, such as GSM8K, MATH500, GPQA, and AIME. Our code is open-source and available at https://github.com/galaxyChen/overthinking.
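To make the outcome-perspective idea concrete, here is a minimal sketch of one way such a metric could be computed. The function name, input format, and exact formula are illustrative assumptions for this summary, not the paper's actual definition: it scores a response by the fraction of generated tokens spent up to and including the first correct solution round, so a model that answers correctly early and then keeps producing redundant rounds receives a low score.

```python
def outcome_efficiency(solution_rounds):
    """Illustrative outcome-style efficiency score (an assumption, not
    the paper's formula).

    solution_rounds: list of (token_count, is_correct) tuples, one per
    solution round in a single long-CoT response, in generation order.

    Returns tokens-up-to-first-correct-round / total tokens, so a value
    near 1.0 means little was wasted after the first correct answer.
    Returns 0.0 if no round is correct (no useful outcome produced).
    """
    total_tokens = sum(tokens for tokens, _ in solution_rounds)
    if total_tokens == 0:
        return 0.0
    tokens_so_far = 0
    for tokens, is_correct in solution_rounds:
        tokens_so_far += tokens
        if is_correct:
            return tokens_so_far / total_tokens
    return 0.0


# A response that solves 2+3 correctly in 120 tokens but then produces
# two more redundant solution rounds scores poorly:
score = outcome_efficiency([(120, True), (300, True), (280, True)])
```

In this hypothetical example the score is 120/700 ≈ 0.17, quantifying the intuition that most of the response was overthinking.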
Lay Summary: Are long reasoning models efficient when they think? We found that long reasoning models have a serious overthinking problem: they repeatedly generate redundant, homogeneous solutions that have little impact on accuracy but greatly increase the cost of inference. We comprehensively analyzed this phenomenon and propose a training strategy based on a preference optimization algorithm. Experiments show that our method can greatly reduce the number of generated tokens while maintaining mathematical reasoning ability.
Link To Code: https://github.com/galaxyChen/overthinking
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Models, OpenAI-o1, QwQ, R1, Reasoning, Math, Overthinking, Efficiency
Submission Number: 2537