One Sample to Rule Them All: Extreme Data Efficiency in RL Scaling

Published: 16 Oct 2025, Last Modified: 10 Nov 2025 · NeurIPS 2025 ER Workshop · CC BY 4.0
Keywords: Reinforcement Learning, Multidisciplinary Reasoning, Data Efficiency
TL;DR: We find that a single well-selected or well-designed math sample can elicit the multidisciplinary reasoning ability of large language models better than training with datasets orders of magnitude larger.
Abstract: The reasoning ability of large language models (LLMs) can be unleashed with reinforcement learning (RL) [OpenAI, 2024, DeepSeek-AI et al., 2025a, Zeng et al., 2025]. Existing successful RL attempts in LLMs usually rely on thousands of high-quality samples or more. In this paper, we challenge fundamental assumptions about data requirements in RL for LLMs by demonstrating the remarkable effectiveness of one-shot learning. Specifically, we introduce polymath learning, a framework for designing a single training sample that elicits multidisciplinary impact. We present three key findings: (1) a single, strategically selected math reasoning sample can produce significant performance improvements across multiple domains, including physics, chemistry, and biology, when used for RL; (2) the math skills most salient to reasoning suggest the characteristics of an optimal polymath sample; and (3) an engineered synthetic sample that integrates elements from multiple subjects outperforms training with naturally occurring individual samples. Our approach outperforms training with far larger datasets across various reasoning benchmarks, demonstrating that sample quality and design, rather than quantity, may be the key to unlocking enhanced reasoning capabilities in language models. Our results suggest a shift, which we dub sample engineering, toward precision engineering of training samples rather than simply increasing data volume.
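To make the setting concrete, the sketch below illustrates what "RL on a single training sample" could look like under a GRPO-style, group-relative policy gradient with a verifiable binary reward. This is an illustrative toy, not the paper's algorithm (which the abstract does not specify): the 16-token categorical "policy", the fixed correct answer token, and the group size are all hypothetical stand-ins for an LLM, a verified math answer, and rollout sampling.

```python
import torch

# Toy stand-in for one-sample RL: the single training sample is a prompt
# whose verifiable answer is token ANSWER; the "policy" is a categorical
# distribution over a tiny vocabulary. Group-relative advantages follow a
# GRPO-style recipe (an assumption, not the paper's stated method).
VOCAB, ANSWER, GROUP = 16, 7, 32
logits = torch.zeros(VOCAB, requires_grad=True)   # policy parameters
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample((GROUP,))                # G rollouts of the same sample
    rewards = (actions == ANSWER).float()          # binary verifiable reward
    # Group-relative advantage: normalize rewards within the rollout group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-6)
    loss = -(dist.log_prob(actions) * adv).mean()  # REINFORCE-style objective
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"P(correct answer) = {torch.softmax(logits, -1)[ANSWER].item():.3f}")
```

Note that when every rollout in the group succeeds (or fails), the normalized advantages vanish and the update is zero, which is why the choice of the single sample's difficulty matters in this regime.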
Submission Number: 159