The Quest for Efficient Reasoning: A Data-Centric Benchmark for CoT Distillation

Published: 26 Jan 2026, Last Modified: 13 Feb 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: Chain-of-Thought, Knowledge Distillation, Large Language Models, Benchmarking, Data Augmentation, Data Selection, Data Mixing
TL;DR: DC-CoT is the first benchmark for data-centric CoT distillation, testing how augmentation, selection, and mixing impact student LLMs. It evaluates multiple teachers and students across multiple reasoning tasks, focusing on IID and OOD generalization and cross-domain transfer.
Abstract: Data-centric distillation, including data augmentation, selection, and mixing, offers a promising path to creating smaller, more efficient student Large Language Models (LLMs) that retain strong reasoning abilities. However, a comprehensive benchmark for systematically assessing the effect of each distillation approach is still lacking. This paper introduces DC-CoT, the first data-centric benchmark that investigates data manipulation in chain-of-thought (CoT) distillation from method, model, and data perspectives. Utilizing various teacher models (e.g., o4-mini, Gemini-Pro, Claude-3.5) and student architectures (e.g., 3B, 7B parameters), we rigorously evaluate the impact of these data manipulations on student model performance across multiple reasoning datasets, with a focus on in-distribution (IID) and out-of-distribution (OOD) generalization and cross-domain transfer. Our findings aim to provide actionable insights and establish best practices for optimizing CoT distillation through data-centric techniques, ultimately facilitating the development of more accessible and capable reasoning models. The anonymous codebase can be accessed at https://anonymous.4open.science/r/DC-COT-FF4C/
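To make the three data manipulations concrete, below is a minimal, illustrative Python sketch of a data-centric CoT distillation pipeline, assuming only the structure implied by the abstract. It is not the authors' implementation (see the anonymous repository for that): `teacher_generate` is a hypothetical stand-in for a teacher-model API call, and the pool names and mixing ratios are placeholders.

```python
import random

# Hypothetical stand-in for a call to a teacher LLM (e.g., o4-mini).
# Here it just fabricates a rationale string for illustration.
def teacher_generate(question: str, seed: int) -> dict:
    rng = random.Random(seed)
    rationale = f"Step-by-step reasoning #{rng.randint(1, 999)} for: {question}"
    return {"question": question, "rationale": rationale, "answer": "42"}

# Augmentation: sample several CoT rationales per question from the teacher.
def augment(questions, samples_per_question=3):
    return [teacher_generate(q, seed=i)
            for q in questions
            for i in range(samples_per_question)]

# Selection: keep only rationales whose final answer matches the gold label.
def select(examples, gold):
    return [ex for ex in examples if gold.get(ex["question"]) == ex["answer"]]

# Mixing: combine per-domain pools under fixed ratios into one training set.
def mix(pools, ratios, total, seed=0):
    rng = random.Random(seed)
    mixture = []
    for domain, ratio in ratios.items():
        k = min(int(total * ratio), len(pools[domain]))
        mixture.extend(rng.sample(pools[domain], k))
    rng.shuffle(mixture)
    return mixture

if __name__ == "__main__":
    math_qs = ["What is 6 * 7?"]
    gold = {"What is 6 * 7?": "42"}
    math_pool = select(augment(math_qs), gold)
    pools = {"math": math_pool, "commonsense": math_pool}  # toy pools
    train_set = mix(pools, {"math": 0.7, "commonsense": 0.3}, total=10)
    print(f"{len(train_set)} distillation examples ready for student fine-tuning")
```

In a real pipeline, the resulting mixture would be used to fine-tune the student model; the benchmark's contribution is measuring how each of these three knobs affects the student's IID, OOD, and cross-domain performance.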
Primary Area: datasets and benchmarks
Submission Number: 13389