Better, Faster: Harnessing Self-Improvement in Large Reasoning Models

ICLR 2026 Conference Submission 22671 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Self-improvement, Large Language Model, Reasoning, Post-training
TL;DR: We propose an innovative self-improvement method to enhance the reasoning performance and efficiency of large reasoning models.
Abstract: While large reasoning models (LRMs) trained with explicit reasoning trajectories have demonstrated impressive performance, obtaining high-quality trajectories is often costly and time-consuming. Hence, recent work introduces a self-improvement paradigm that enables LRMs to improve themselves by self-generating reasoning trajectories as training data, without external supervision. However, we find that this paradigm often falls short on complex reasoning tasks and can even lead to model collapse. Through a series of preliminary analyses, we reveal two shortcomings of self-improvement in LRMs: (1) data imbalance, where most training samples are simple while the challenging yet crucial samples are scarce; and (2) overthinking, where many undesired samples with redundant and repetitive reasoning steps are used for self-training. To this end, we propose HSIR, which effectively Harnesses Self-Improvement in large Reasoning models via two simple yet effective approaches. Specifically, HSIR introduces a verify-then-exit sampling strategy to mitigate data imbalance by efficiently collecting more accurate solutions for difficult queries, and designs an Intrinsic Diversity score to quantify overthinking and filter out undesired solutions. We apply HSIR to various post-training paradigms, among which we further propose H-GRPO, an enhanced GRPO algorithm that leverages the intrinsic diversity as an external reward to encourage concise and diverse reasoning via reinforcement learning. Extensive results show that HSIR not only effectively enhances reasoning performance, bringing average gains of up to +10.9%, but also significantly improves reasoning efficiency, reducing relative inference overhead by up to 42.4%.
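The abstract only names the two mechanisms without giving formulas, so the following is a minimal Python sketch of how they might fit together, not the authors' exact method: the distinct-n-gram ratio stands in as an assumed proxy for the (unspecified) Intrinsic Diversity score, and `sample_fn` / `verify_fn` are hypothetical callables for the policy model and an answer verifier.

```python
# Illustrative sketch only: the diversity proxy (distinct n-gram ratio),
# the thresholds, and the exit rule are assumptions for intuition,
# not the paper's exact definitions.

def intrinsic_diversity(reasoning_text: str, n: int = 3) -> float:
    """Proxy for an intrinsic-diversity score: fraction of distinct
    n-grams among all n-grams in a reasoning trace. Repetitive,
    "overthinking" traces score low; concise, varied traces score high."""
    tokens = reasoning_text.split()
    if len(tokens) < n:
        return 1.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return len(set(ngrams)) / len(ngrams)


def verify_then_exit(sample_fn, verify_fn, query,
                     max_samples: int = 16, min_diversity: float = 0.6):
    """Sample solutions for a (typically difficult) query and exit as soon
    as one is verified correct and not overly repetitive; otherwise keep
    sampling up to the budget. Returns the accepted solution or None."""
    for _ in range(max_samples):
        solution = sample_fn(query)          # draw one reasoning trace
        if verify_fn(query, solution) and \
           intrinsic_diversity(solution) >= min_diversity:
            return solution                  # early exit on a usable sample
    return None
```

Under this reading, the same diversity score could also be added as an extra reward term in a GRPO-style update to discourage redundant reasoning, which is the role the abstract attributes to H-GRPO.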
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 22671