When Does Reasoning Matter? A Controlled Study of Reasoning’s Contribution to Model Performance

ICLR 2026 Conference Submission 17338 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Reasoning, Knowledge Distillation, Efficiency–Performance Trade-off, Instruction Fine-Tuning, Supervised Fine-Tuning, Controlled Comparative Study, Training and Inference Costs
TL;DR: We present a controlled study comparing reasoning distillation and instruction fine-tuning, analyzing their performance–cost trade-offs at training and inference time across model scales and tasks.
Abstract: Large Language Models (LLMs) with reasoning capabilities have achieved state-of-the-art performance on a wide range of tasks. Despite reasoning's empirical success, the tasks and model scales at which it becomes effective, as well as its training and inference costs, remain underexplored. In this work, we use a synthetic data distillation framework to conduct a large-scale supervised study. We compare Instruction Fine-Tuning (IFT) and reasoning models of varying sizes on a wide range of math-centric and general-purpose tasks, evaluating both multiple-choice and open-ended formats. Our analysis reveals that reasoning consistently improves model performance, often matching or surpassing significantly larger IFT systems. Notably, while IFT remains Pareto-optimal in training and inference costs, reasoning models become increasingly valuable as model size scales, overcoming IFT performance limits on reasoning-intensive and open-ended tasks.
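
To make the comparison concrete, the sketch below illustrates one plausible way the two supervised targets could differ under a distillation setup: an IFT example trains the student on the final answer alone, while a reasoning example trains it on a teacher-generated trace followed by the answer. This is a minimal illustration, not the authors' code; the field names, the `<think>` delimiter, and the worked example are all assumptions.

```python
# Hypothetical sketch of the two supervised-target formats compared in
# the paper. Everything here (field names, trace delimiter, example
# record) is illustrative, not taken from the submission.

def make_ift_example(question: str, answer: str) -> dict:
    """Instruction fine-tuning target: the final answer only."""
    return {"prompt": question, "target": answer}

def make_reasoning_example(question: str, trace: str, answer: str) -> dict:
    """Reasoning-distillation target: a teacher-generated trace, then the answer."""
    return {"prompt": question, "target": f"<think>{trace}</think>\n{answer}"}

if __name__ == "__main__":
    q = "What is 17 * 24?"
    trace = "17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408."
    print(make_ift_example(q, "408"))        # shorter target, cheaper to train and serve
    print(make_reasoning_example(q, trace, "408"))  # longer target, higher inference cost
```

The length difference between the two targets is the source of the training- and inference-cost gap the abstract describes: reasoning targets add tokens at both stages in exchange for the performance gains reported on reasoning-intensive tasks.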
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 17338