The Valley of Code Reasoning: Scaling Knowledge Distillation of Large Language Models

Published: 22 Sept 2025, Last Modified: 25 Nov 2025. DL4C @ NeurIPS 2025 Poster. License: CC BY 4.0
Keywords: reasoning distillation, scaling laws, training paradigm, code reasoning
TL;DR: We study the training paradigm for code reasoning distillation in LLMs and find that for small models there is a dip in performance (a valley) that is correlated with the model learning the reasoning structure.
Abstract: Distilling the thinking traces of a Large Language Model (LLM) with reasoning capabilities into a smaller model has proven effective. Yet, there is little work on how model performance scales with the quantity of distillation data. In this work, we study the scaling trend of distilling competitive coding skills into two small non-reasoning LLMs. We validate the hypothesis that there is a valley of code reasoning: downstream performance on competitive coding first drops as data quantity increases, then steadily improves in a log-linear fashion. Having identified this trend, we further finetune the models at two different distillation stages on the same data to ground conclusions about their respective learning phases. We find that across stages in the low and medium-low data regimes, small models benefit significantly more from easier coding questions than from harder ones. Surprisingly, we also find that the correctness of outputs in the training data makes no difference to distillation outcomes. Our work represents a step toward understanding the training dynamics of code reasoning distillation beyond intuition.
Submission Number: 63