Keywords: large language models, reasoning
TL;DR: We introduce NaturalReasoning, a 2.8M-question dataset spanning diverse domains, enabling effective knowledge distillation and unsupervised self-training to enhance LLM reasoning capabilities.
Abstract: Scaling reasoning capabilities beyond traditional domains such as math and coding is hindered by a lack of diverse, high-quality questions. To overcome this limitation, we introduce a scalable approach for generating diverse and challenging reasoning questions, accompanied by reference answers. We present NaturalReasoning, a comprehensive dataset of 2.8 million questions spanning multiple domains, including STEM fields (e.g., Physics, Computer Science), Economics, Social Sciences, and more. We demonstrate the utility of these questions through knowledge distillation experiments, which show that NaturalReasoning can effectively elicit and transfer reasoning capabilities from a strong teacher model. We further show that NaturalReasoning is effective for unsupervised self-training using external reward models or self-rewarding.
Croissant File: json
Dataset URL: https://huggingface.co/datasets/facebook/natural_reasoning
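For reference, a minimal sketch of loading the dataset at the URL above with the Hugging Face `datasets` library; the split name and record layout are assumptions based on typical Hub datasets, so check the dataset card for specifics.

    # Requires: pip install datasets
    from datasets import load_dataset

    # Download NaturalReasoning from the Hugging Face Hub.
    # The "train" split name is an assumption; see the dataset card for actual splits.
    ds = load_dataset("facebook/natural_reasoning", split="train")

    print(ds)     # number of rows and column names
    print(ds[0])  # inspect one record, e.g. a question with its reference answer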
Primary Area: Datasets & Benchmarks for applications in language modeling and vision language modeling
Submission Number: 2285