Few-Shot Knowledge Distillation for Language Models via Counterfactual Explanations

Published: 29 Sept 2025, Last Modified: 12 Oct 2025
NeurIPS 2025 - Reliable ML Workshop
License: CC BY 4.0
Keywords: Knowledge distillation, LLM, Few-shot Data, Data scarcity
Abstract: Knowledge distillation is a promising approach for transferring capabilities from resource-intensive teacher models to smaller, resource-efficient student models that can be deployed easily, particularly in task-aware scenarios. However, existing task-aware distillation methods typically require substantial quantities of data, which may be unavailable or expensive to obtain in many practical settings. In this paper, we address this challenge by introducing a novel strategy called \ours for \emph{few-shot task-aware knowledge distillation by systematically infusing counterfactual explanations}. Counterfactual explanations (CFEs) are inputs that flip the output prediction of the teacher model with minimal input perturbation. Our strategy \ours, short for \textbf{Co}unterfactual-explanation-infused \textbf{D}istillation, leverages these CFEs to map the teacher's decision boundary precisely with significantly fewer samples. We provide theoretical guarantees motivating the role of CFEs in distillation, from both statistical and geometric perspectives. We show mathematically that CFEs improve parameter estimation by providing more informative examples near the teacher's decision boundary. We also derive geometric insights into how CFEs act as \emph{knowledge probes}, helping the student mimic the teacher's decision boundaries more effectively than standard data. We perform experiments across various datasets and LLMs to show that \ours outperforms standard distillation approaches in few-shot regimes (8-512 samples): under an equal number of shots, \ours achieves better performance while using only half as many original samples as the baselines, with the remainder being their corresponding CFEs.
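To make the notion of a counterfactual explanation concrete, the following is a minimal illustrative sketch (not the paper's method or code) for a toy linear teacher classifier, where the minimal-perturbation CFE has a closed form: the orthogonal projection of the input onto the decision hyperplane, stepped slightly past it. All names here (`teacher_predict`, `counterfactual`) are hypothetical.

```python
# Hedged sketch: a counterfactual explanation (CFE) is the smallest
# input change that flips the teacher's prediction. For a linear
# teacher sign(w.x + b), the nearest boundary point is the orthogonal
# projection of x onto the hyperplane w.x + b = 0.

def teacher_predict(x, w, b):
    """Toy linear teacher: returns 1 if w.x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def counterfactual(x, w, b, margin=1e-6):
    """Minimal L2 perturbation of x that crosses the decision boundary.

    Projects x onto the hyperplane w.x + b = 0, then steps a tiny
    `margin` past it so the predicted label actually flips.
    """
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm_sq = sum(wi * wi for wi in w)
    step = (score / norm_sq) * (1 + margin)  # signed step along w
    return [xi - step * wi for xi, wi in zip(x, w)]

# Example: the CFE of x lands just across the boundary from x.
w, b = [2.0, -1.0], 0.5
x = [1.0, 0.0]                     # teacher predicts 1 here
x_cf = counterfactual(x, w, b)     # small perturbation, label flips
assert teacher_predict(x, w, b) != teacher_predict(x_cf, w, b)
```

In the paper's setting the teacher is an LLM rather than a linear model, so CFEs are not available in closed form; the sketch only conveys the geometric intuition of boundary-adjacent examples serving as knowledge probes.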
Submission Number: 134