Keywords: Language Models, Knowledge Distillation, Pre-Training
TL;DR: This work introduces an efficient, flexible, and effective knowledge distillation framework for pre-training language models.
Abstract: Knowledge distillation (KD) is widely used to train small, high-performing student language models (LMs) using large teacher LMs.
While effective in fine-tuning, KD during pre-training faces challenges in efficiency, flexibility, and effectiveness.
Existing methods either incur high computational costs due to online teacher inference, require tokenization matching between teacher and student LMs, or risk losing the difficulty and diversity of the teacher-generated training data.
In this work, we propose **MiniPLM**, a KD framework for pre-training LMs by refining the training data distribution with the teacher LM's knowledge.
For efficiency, MiniPLM performs offline teacher inference, allowing KD for multiple student LMs without adding training costs.
For flexibility, MiniPLM operates solely on the training corpus, enabling KD across model families.
For effectiveness, MiniPLM leverages the differences between large and small LMs to enhance the training data difficulty and diversity, helping student LMs acquire versatile and sophisticated knowledge.
Extensive experiments demonstrate that MiniPLM boosts the student LMs' performance on 9 common downstream tasks, improves language modeling capabilities, and reduces pre-training computation.
The benefits of MiniPLM extend to larger training scales, as evidenced by scaling curve extrapolation.
Further analysis shows that MiniPLM supports KD across model families and improves pre-training data utilization. Our code, data, and models can be found at https://github.com/thu-coai/MiniPLM.
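To make the corpus-level refinement described above concrete, here is a minimal, hypothetical Python sketch (not the authors' released implementation) of offline, difference-based data selection: a large teacher LM and a small reference LM each score every document once, and documents the teacher assigns much higher likelihood than the small LM are kept, so any student, regardless of tokenizer or model family, can be pre-trained on the refined corpus. The model names, the exact scoring rule, and the keep ratio are assumptions for illustration only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sequence_logprob(model, tokenizer, text, device="cpu"):
    """Average token log-probability of `text` under `model`."""
    ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids.to(device)
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # negative mean cross-entropy = mean token log-prob

def refine_corpus(corpus, teacher_name="gpt2-xl", reference_name="gpt2", keep_ratio=0.5):
    """Hypothetical offline refinement: keep documents where a large teacher LM
    assigns much higher likelihood than a small reference LM."""
    teacher = AutoModelForCausalLM.from_pretrained(teacher_name).eval()
    reference = AutoModelForCausalLM.from_pretrained(reference_name).eval()
    tok_t = AutoTokenizer.from_pretrained(teacher_name)
    tok_r = AutoTokenizer.from_pretrained(reference_name)

    # Offline scoring: teacher inference runs once per document, independent of
    # any particular student LM, so the refined corpus can be reused freely.
    scored = []
    for text in corpus:
        gap = sequence_logprob(teacher, tok_t, text) - sequence_logprob(reference, tok_r, text)
        scored.append((gap, text))

    # Retain the documents with the largest teacher-vs-reference likelihood gap.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[: int(len(scored) * keep_ratio)]]
```

Because the refinement operates only on the training corpus, the student LM never needs to share a tokenizer with the teacher, which is what enables KD across model families in this setup.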
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3739