Keywords: Large language model, Long chain-of-thought, Distillation
Abstract: Efficient reasoning distillation for long chain-of-thought (CoT) models is increasingly constrained by the assumption of a single oracle teacher, despite the practical
availability of multiple candidate teachers and growing CoT corpora. We revisit
teacher selection and observe that different students have different “best teachers,”
and even for the same student, the best teacher can vary across datasets. Motivated by this, and to unify the reasoning abilities of multiple teachers in a single student while resolving conflicts among their supervision, we propose Merge-of-Thought Distillation (MoT), a lightweight framework that alternates between teacher-specific
supervised fine-tuning branches and weight-space merging of the resulting student variants. On competition math benchmarks, using only about 200 CoT samples, applying MoT to a Qwen3-14B student surpasses strong models including
DeepSeek-R1, Qwen3-32B, and OpenAI-O1, demonstrating substantial gains. Moreover, MoT consistently outperforms the best single-teacher distillation, improves
general reasoning beyond mathematics while reducing catastrophic forgetting, and
shows robustness to distribution-shifted and peer-level teachers. Finally, we demonstrate that MoT produces a consensus CoT, eliminating teacher-specific inductive biases and inter-teacher conflicts while repeatedly reinforcing consensus reasoning features. These results position MoT as a simple, effective route to efficiently distilling long CoT capabilities from diverse teachers into
compact students.
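To make the alternation concrete, here is a minimal sketch of the MoT-style loop described in the abstract. It assumes uniform weight-space averaging as the merge operator and uses a toy model with synthetic per-teacher batches as a stand-in for the actual student LLM, CoT corpora, and training recipe, none of which are specified here.

```python
# Minimal sketch of a Merge-of-Thought (MoT) style loop, assuming:
#   * one SFT branch per teacher, each starting from the current merged student
#   * uniform weight-space averaging as the merge operator
# The tiny model and synthetic "teacher CoT" batches are illustrative placeholders.
import copy
import torch
import torch.nn as nn


def sft_branch(student: nn.Module, teacher_batches, lr=1e-3, epochs=1) -> nn.Module:
    """Fine-tune a copy of the student on one teacher's data (toy regression stand-in)."""
    branch = copy.deepcopy(student)
    opt = torch.optim.AdamW(branch.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in teacher_batches:
            opt.zero_grad()
            loss_fn(branch(x), y).backward()
            opt.step()
    return branch


def merge_weights(variants):
    """Average the parameters of the teacher-specific student variants."""
    merged = copy.deepcopy(variants[0])
    with torch.no_grad():
        for name, param in merged.named_parameters():
            stacked = torch.stack([dict(v.named_parameters())[name] for v in variants])
            param.copy_(stacked.mean(dim=0))
    return merged


# Toy setup: two "teachers", each contributing a few synthetic batches.
torch.manual_seed(0)
student = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
teachers = [
    [(torch.randn(4, 8), torch.randn(4, 1)) for _ in range(3)],  # teacher A's data
    [(torch.randn(4, 8), torch.randn(4, 1)) for _ in range(3)],  # teacher B's data
]

# MoT loop: alternate teacher-specific SFT branches with weight-space merging.
for round_idx in range(2):
    variants = [sft_branch(student, batches) for batches in teachers]
    student = merge_weights(variants)
    print(f"round {round_idx}: merged {len(variants)} student variants")
```

Uniform averaging is only one possible merge operator; weighted or more elaborate model-merging schemes would slot into `merge_weights` without changing the surrounding loop.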
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 8687