Efficient Knowledge Injection in LLMs via Self-Distillation

TMLR Paper 4524 Authors

20 Mar 2025 (modified: 02 Apr 2025) · Under review for TMLR · CC BY 4.0
Abstract: In many practical applications, large language models (LLMs) need to acquire new knowledge not present in their pre-training data. Efficiently leveraging this knowledge usually relies on supervised fine-tuning or retrieval-augmented generation (RAG). Although RAG has emerged as the industry standard for knowledge injection, fine-tuning has not yet achieved comparable success. This paper proposes utilizing prompt distillation, a self-distillation-based method previously explored primarily for style alignment and instruction tuning, to internalize new factual knowledge from free-form documents. Unlike prior methods, our approach requires neither larger teacher models nor structured knowledge formats. Across multiple LLM sizes and model families, we show that prompt distillation outperforms standard supervised fine-tuning and can even surpass RAG. We analyze the key factors contributing to prompt distillation's effectiveness and examine how it scales.
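For readers unfamiliar with the technique, the sketch below illustrates the general prompt-distillation idea described in the abstract: the same model acts as teacher and student, the teacher sees the new document in its prompt, and the student is trained without it to match the teacher's token distributions. This is a minimal illustration under assumed choices (PyTorch/Hugging Face stack, an arbitrary base model, simple Q/A prompt templates, approximate answer-token alignment), not the authors' actual training recipe.

```python
# Minimal sketch of prompt distillation as self-distillation.
# Teacher = frozen copy of the model with the document in its prompt;
# student = the same model trained to match the teacher without the document.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.2-1B"  # illustrative choice, not from the paper
tok = AutoTokenizer.from_pretrained(model_name)
student = AutoModelForCausalLM.from_pretrained(model_name)
teacher = AutoModelForCausalLM.from_pretrained(model_name)  # frozen copy
teacher.eval()

def distill_step(document: str, question: str, answer: str) -> torch.Tensor:
    """One distillation loss on a (document, question, answer) triple."""
    # Teacher is conditioned on the document; student sees only the question.
    teacher_text = f"{document}\n\nQ: {question}\nA: {answer}"
    student_text = f"Q: {question}\nA: {answer}"
    t_ids = tok(teacher_text, return_tensors="pt").input_ids
    s_ids = tok(student_text, return_tensors="pt").input_ids

    # Approximate alignment: distill only over the trailing answer tokens,
    # which both inputs share (boundary tokenization may differ slightly).
    ans_len = len(tok(answer, add_special_tokens=False).input_ids)
    with torch.no_grad():
        t_logits = teacher(t_ids).logits[:, -ans_len - 1:-1, :]
    s_logits = student(s_ids).logits[:, -ans_len - 1:-1, :]

    # KL(teacher || student) averaged over the batch, on the answer positions.
    return F.kl_div(
        F.log_softmax(s_logits, dim=-1),
        F.log_softmax(t_logits, dim=-1),
        log_target=True,
        reduction="batchmean",
    )
```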
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=3VVSMbx5so
Changes Since Last Submission: Fixed the font to adhere to the required format and compressed the paper to the regular submission length of 12 pages.
Assigned Action Editor: ~Alessandro_Sordoni1
Submission Number: 4524