Knowledge Distillation ≈ Label Smoothing: Fact or Fallacy?

Published: 07 Oct 2023, Last Modified: 01 Dec 2023, EMNLP 2023 Main
Submission Type: Regular Short Paper
Submission Track: Efficient Methods for NLP
Submission Track 2: Interpretability, Interactivity, and Analysis of Models for NLP
Keywords: knowledge distillation, label smoothing, regularization, interpretation of knowledge distillation
TL;DR: Knowledge distillation has been claimed by some to be a form of label smoothing regularization; we present evidence against that claim in this paper.
Abstract: Knowledge distillation (KD) was originally proposed as a method for transferring knowledge from one model to another, but some recent studies have suggested that it is in fact a form of regularization. Perhaps the strongest argument for this new perspective comes from KD's apparent similarities with label smoothing (LS). Here we re-examine this stated equivalence between the two methods by comparing the predictive confidences of the models they train. Experiments on four text classification tasks involving models of different sizes show that: (a) in most settings, KD and LS drive model confidence in completely opposite directions, and (b) in KD, the student inherits not only its knowledge but also its confidence from the teacher, reinforcing the classical knowledge-transfer view.
Submission Number: 3813
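
For a concrete reference point, the two objectives contrasted in the abstract can be sketched in a few lines of PyTorch, together with a simple confidence measure (mean top-class probability). This is a generic illustration, not the paper's implementation; the function names and the hyperparameters epsilon, alpha, and T are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def label_smoothing_loss(logits, labels, epsilon=0.1):
        # Cross-entropy against a smoothed target: (1 - epsilon) on the gold class,
        # with epsilon spread uniformly over all classes.
        num_classes = logits.size(-1)
        target = (1 - epsilon) * F.one_hot(labels, num_classes).float() + epsilon / num_classes
        return -(target * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

    def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
        # Hard-label cross-entropy mixed with KL divergence toward the teacher's
        # temperature-softened distribution (scaled by T^2, as is conventional).
        ce = F.cross_entropy(student_logits, labels)
        kl = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                      F.softmax(teacher_logits / T, dim=-1),
                      reduction="batchmean") * (T * T)
        return (1 - alpha) * ce + alpha * kl

    def mean_confidence(logits):
        # One common notion of predictive confidence: the average probability
        # the model assigns to its top-ranked class.
        return F.softmax(logits, dim=-1).max(dim=-1).values.mean()

Under the LS-equivalence view, training with distillation_loss and with label_smoothing_loss would be expected to move mean_confidence in the same direction; the paper's experiments indicate that in most settings they move it in opposite directions.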