Teacher’s pet: understanding and mitigating biases in distillation

Published: 16 Nov 2022, Last Modified: 28 Feb 2023, Accepted by TMLR
Abstract: Knowledge distillation is widely used as a means of improving the performance of a relatively simple ``student'' model using the predictions from a complex ``teacher'' model. Several works have shown that distillation significantly boosts the student's \emph{overall} performance; however, are these gains uniform across all data subgroups? In this paper, we show that distillation can \emph{harm} performance on certain subgroups, e.g., classes with few associated samples, compared to the vanilla student trained using the one-hot labels. We trace this behaviour to errors in the teacher's predictive distribution being transferred to and \emph{amplified} by the student model, and formally prove that distillation can indeed harm underrepresented subgroups in certain regression settings. To mitigate this problem, we present techniques which soften the teacher influence for subgroups where it is less reliable. Experiments on several image classification benchmarks show that these modifications of distillation maintain the boost in overall accuracy while additionally ensuring improved subgroup performance.
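As context for the abstract, the sketch below illustrates the general idea of softening the teacher's influence where it is less reliable; it is a minimal illustration, not the paper's actual method. It assumes a standard distillation objective that blends the one-hot cross-entropy with a temperature-scaled KL term toward the teacher, and introduces a hypothetical per-example weight `alpha` that can be lowered for subgroups on which the teacher is untrustworthy. All function and parameter names are illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha, T=2.0):
    """Blend one-hot and teacher-matching losses per example.

    alpha: scalar or per-example tensor in [0, 1]; smaller values reduce
    the teacher's influence (e.g., on subgroups where it is unreliable).
    T: softmax temperature used for the teacher-matching term.
    """
    # Per-example cross-entropy against the one-hot labels.
    ce = F.cross_entropy(student_logits, labels, reduction="none")
    # Per-example KL divergence from the teacher's softened distribution.
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="none",
    ).sum(dim=-1) * (T * T)
    # Lowering alpha for a subgroup shifts the student back toward the
    # one-hot labels for those examples.
    return ((1.0 - alpha) * ce + alpha * kl).mean()
```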
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Hanwang_Zhang3
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 291