Don't Ignore the Tail: Decoupling top-$K$ Probabilities for Efficient Language Model Distillation

16 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Language Model, Knowledge Distillation, Dark Knowledge, Qwen, Phi2/3, Llama-2, GSM8K
Abstract: The core learning signal in language model distillation is the standard Kullback-Leibler (KL) divergence between the student and teacher distributions. This divergence tends to be dominated by the teacher’s highest-probability modes, diminishing the influence of less probable yet potentially informative components of the output distribution. We propose a new tail-aware divergence that decouples the contribution of the teacher model’s top-$K$ predicted probabilities from that of the lower-probability tokens, while maintaining the same computational profile as the KL divergence. Our decoupled approach reduces the impact of the teacher’s modes and, consequently, increases the contribution of the tail of the distribution. Experimental results demonstrate that our modified distillation method yields competitive performance in both pre-training and supervised distillation of decoder models across various datasets. Furthermore, the distillation process is efficient and can be carried out on a modest academic budget even for large datasets, eliminating the need for industry-scale computing capabilities.\footnote{We used LLMs like Grammarly and ChatGPT-Plus to check grammar and spelling and to polish our work.}
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 7906
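The abstract describes the decoupled top-$K$ objective only at a high level. As a rough illustration, the PyTorch sketch below shows one way a KL-style loss can be split into a top-$K$ ("head") term and a remaining ("tail") term that are reweighted independently. The split-then-reweight structure and the names `k`, `alpha`, `beta`, and `temperature` are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def decoupled_topk_kl(teacher_logits, student_logits, k=10,
                      alpha=1.0, beta=1.0, temperature=1.0):
    """Sketch of a tail-aware KD loss: the KL contributions of the teacher's
    top-k tokens and of the remaining tokens are summed separately and
    reweighted before being combined. Hyperparameters are hypothetical."""
    # Temperature-scaled teacher probabilities and student log-probabilities.
    p = F.softmax(teacher_logits / temperature, dim=-1)
    log_p = F.log_softmax(teacher_logits / temperature, dim=-1)
    log_q = F.log_softmax(student_logits / temperature, dim=-1)

    # Per-token KL terms: p_i * (log p_i - log q_i).
    kl_terms = p * (log_p - log_q)

    # Boolean mask selecting the teacher's top-k tokens at each position.
    topk_idx = p.topk(k, dim=-1).indices
    head_mask = torch.zeros_like(p, dtype=torch.bool).scatter_(-1, topk_idx, True)

    # Sum the head and tail contributions separately, then reweight so the
    # tail's gradient signal is not drowned out by the teacher's largest modes.
    head = kl_terms.masked_fill(~head_mask, 0.0).sum(dim=-1)
    tail = kl_terms.masked_fill(head_mask, 0.0).sum(dim=-1)
    return (alpha * head + beta * tail).mean()


# Example: distill a batch of 4 positions over a 32k-token vocabulary.
teacher_logits = torch.randn(4, 32000)
student_logits = torch.randn(4, 32000, requires_grad=True)
loss = decoupled_topk_kl(teacher_logits, student_logits, k=10, alpha=0.5, beta=1.0)
loss.backward()
```

With `alpha = beta = 1` the two terms recombine into the ordinary temperature-scaled KL divergence, and the only additional work is a top-$k$ selection over the teacher distribution, which is consistent with the abstract's claim that the method keeps the same computational profile as the KL divergence.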