Unified Data Selection for LLM Reasoning

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: llms, reasoning, data selection
TL;DR: We propose a novel metric that measures reasoning quality, enabling a unified and more efficient data-centric approach to training powerful LLMs.
Abstract: Effectively training LLMs for complex, long-CoT reasoning is often bottlenecked by the need for massive amounts of high-quality reasoning data. Existing methods are either computationally expensive or fail to reliably distinguish high- from low-quality reasoning samples. To address this, we propose High-Entropy Sum (HES)—a training-free metric that sums only the entropy of the top 0.5% highest-entropy tokens in each reasoning sequence, focusing on critical forking points to better capture reasoning quality. We validate HES across three mainstream training paradigms: supervised fine-tuning (SFT), rejection fine-tuning (RFT), and reinforcement learning (RL). In SFT, training on just the top 20% of data ranked by HES matches full-dataset performance, while training on the lowest-HES data severely degrades it. In RFT, HES-based selection outperforms the random baseline. In RL, pairing the highest-HES successful trajectories with randomly sampled failed ones enables the model to learn both strong reasoning patterns and diverse failure modes, significantly surpassing existing training-free selection methods. Our findings establish HES as a robust, training-free metric that enables a unified, data-centric approach to efficiently developing advanced reasoning in LLMs.
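The abstract's definition of HES, summing the entropy of only the top 0.5% highest-entropy tokens in a sequence, can be sketched as a small function. This is a hedged illustration, not the authors' released implementation: the function name `high_entropy_sum`, the `top_frac` parameter, and the input format (a per-token entropy array) are all assumptions.

```python
import numpy as np

def high_entropy_sum(token_entropies, top_frac=0.005):
    """Sketch of the High-Entropy Sum (HES) metric described in the
    abstract: sum the entropies of the top `top_frac` (0.5% by default)
    highest-entropy tokens in a reasoning sequence.

    Names and details here are assumptions for illustration.
    """
    ent = np.asarray(token_entropies, dtype=float)
    # Number of tokens in the top 0.5% of the sequence (at least one,
    # so short sequences still yield a score).
    k = max(1, int(np.ceil(top_frac * ent.size)))
    # Select the k largest entropy values without a full sort.
    top_k = np.partition(ent, -k)[-k:]
    return float(top_k.sum())

# Example: a 200-token sequence where one "forking point" token has
# much higher entropy than the rest; only that token contributes.
score = high_entropy_sum([0.1] * 199 + [2.0])
```

Under this reading, sequences would then be ranked by their HES score and, e.g., the top 20% retained for SFT, as the abstract reports.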
Primary Area: foundation or frontier models, including LLMs
Submission Number: 8937