Fast and Effective On-Policy Distillation from Reasoning Prefixes

ACL ARR 2026 January Submission6635 Authors

05 Jan 2026 (modified: 20 Mar 2026)
License: CC BY 4.0
Keywords: On-policy knowledge distillation, compute-efficient distillation, chain-of-thought supervision, AI for math
Abstract: On-policy distillation (OPD), which samples trajectories from the student model and supervises them with a teacher at the token level, avoids relying solely on verifiable terminal rewards and can generalize better than off-policy distillation. However, OPD requires expensive on-the-fly sampling from the student policy during training, which substantially increases training cost, especially for long responses. Our initial analysis shows that, during OPD, training signals are often concentrated in the prefix of each output, and that even a short teacher-generated prefix substantially helps the student produce the correct answer. Motivated by these observations, we propose a simple yet effective modification of OPD: we apply the distillation objective only to prefixes of student-generated outputs and terminate sampling early for each rollout during distillation. Experiments on a suite of AI-for-Math benchmarks show that on-policy prefix distillation achieves performance close to full OPD while reducing training cost by one to two orders of magnitude.
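The recipe described in the abstract (sample a short prefix on-policy, then supervise only that prefix token-by-token with the teacher) can be sketched in a few lines. The following is a minimal illustrative sketch assuming a PyTorch and Hugging Face causal-LM setup; the function name, the prefix_len parameter, and the use of a KL divergence with the teacher as target are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def prefix_distillation_step(student, teacher, prompt_ids, prefix_len=256):
        """One sketch of an on-policy prefix distillation step: sample a short
        prefix from the student, then supervise it token-by-token with the
        teacher's distribution. Optimizer step is assumed to happen outside."""
        # 1) On-policy sampling, terminated early after prefix_len new tokens.
        with torch.no_grad():
            rollout = student.generate(
                prompt_ids,
                max_new_tokens=prefix_len,  # early termination of sampling
                do_sample=True,
            )

        # 2) Score the sampled sequence under both models.
        student_logits = student(rollout).logits[:, :-1]
        with torch.no_grad():
            teacher_logits = teacher(rollout).logits[:, :-1]

        # 3) Token-level distillation loss applied only to the generated
        #    prefix (positions after the prompt). Logit position i predicts
        #    token i+1, so the generated tokens are predicted from index
        #    prompt_len - 1 onward.
        gen = slice(prompt_ids.shape[1] - 1, None)
        # KL(teacher || student); the paper's exact divergence is not
        # specified here, so this choice is an assumption.
        loss = F.kl_div(
            F.log_softmax(student_logits[:, gen], dim=-1),
            F.softmax(teacher_logits[:, gen], dim=-1),
            reduction="batchmean",
        )
        loss.backward()
        return loss

Because sampling stops at prefix_len tokens and the loss covers only those positions, both generation and teacher scoring scale with the prefix length rather than the full response length, which is where the claimed one-to-two order-of-magnitude cost reduction would come from.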
Paper Type: Long
Research Area: LLM Efficiency
Research Area Keywords: distillation, LLM efficiency, reasoning, chain-of-thought, fine-tuning, generalization, mathematical reasoning
Contribution Types: Approaches for low compute settings-efficiency
Languages Studied: English
Submission Number: 6635