Knowledge Distillation as Semiparametric Inference

28 Sep 2020 (modified: 14 Jan 2021), ICLR 2021 Poster
  • Keywords: knowledge distillation, semiparametric inference, generalization bounds
  • Abstract: A popular approach to model compression is to train an inexpensive student model to mimic the class probabilities of a highly accurate but cumbersome teacher model. Surprisingly, this two-step knowledge distillation process often leads to higher accuracy than training the student directly on labeled data (a sketch of this standard objective appears after this listing). To explain and enhance this phenomenon, we cast knowledge distillation as a semiparametric inference problem with the optimal student model as the target, the unknown Bayes class probabilities as nuisance, and the teacher probabilities as a plug-in nuisance estimate. By adapting modern semiparametric tools, we derive several new guarantees for the prediction error of standard distillation and develop several enhancements with improved guarantees. We validate our findings empirically on both tabular data and image data and observe consistent improvements from our knowledge distillation enhancements.
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • One-sentence Summary: Viewing knowledge distillation as a semiparametric inference problem leads to improved generalization guarantees for the distillation process
  • Supplementary Material: zip
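
The abstract describes standard distillation as training the student to mimic the teacher's class probabilities, with the teacher serving as a plug-in estimate of the unknown Bayes class probabilities. Below is a minimal PyTorch sketch of that standard soft-target objective; the function name, temperature, and mixing weight `alpha` are illustrative assumptions and do not reflect the paper's code or its proposed enhancements.

```python
# Minimal sketch of the standard knowledge-distillation objective described in
# the abstract: the student mimics the teacher's class probabilities (soft
# targets) while optionally also fitting the hard labels. Illustrative only.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Mix a soft-target term (student mimics teacher probabilities)
    with the usual hard-label cross-entropy."""
    # Teacher class probabilities at temperature T act as the plug-in targets
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student), scaled by T^2 as is conventional
    soft_loss = F.kl_div(log_student, soft_targets,
                         reduction="batchmean") * temperature ** 2
    # Hard-label term: direct training on labeled data
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Example usage with random tensors standing in for model outputs
if __name__ == "__main__":
    student_logits = torch.randn(8, 10, requires_grad=True)
    teacher_logits = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(float(loss))
```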