Abstract: Knowledge distillation has proven to be an effective technique for improving the
performance of a student model using predictions from a teacher model. However,
recent work has shown that gains in average accuracy are not uniform across
subgroups in the data, and in particular can often come at the cost of accuracy
on rare subgroups and classes. To preserve strong performance across classes
that may follow a long-tailed distribution, we develop distillation techniques that
are tailored to improve the student’s worst-class performance. Specifically, we
introduce robust optimization objectives in different combinations for the teacher
and student, and further allow for training with any tradeoff between the overall
accuracy and the robust worst-class objective. We show empirically that our robust
distillation techniques not only achieve better worst-class performance, but also
yield a Pareto improvement in the tradeoff between overall and worst-class
performance compared to other baseline methods. Theoretically, we provide
insights into what makes a good teacher when the goal is to train a robust student.
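The abstract describes combining a standard distillation objective with a robust worst-class objective under a tunable tradeoff. The sketch below is one plausible instantiation of that idea, not the paper's exact formulation: it blends an average distillation loss with the loss of the worst-performing class in a batch via an assumed mixing weight `alpha`. All function and parameter names are illustrative.

```python
# A minimal sketch (assumed, not the paper's method) of a robust distillation
# loss that trades off average performance against worst-class performance.
import torch
import torch.nn.functional as F

def robust_distillation_loss(student_logits, teacher_logits, labels,
                             temperature=2.0, alpha=0.5):
    """Blend the average KD loss with a worst-class KD loss.

    alpha = 0 recovers standard (average) distillation;
    alpha = 1 optimizes only the hardest class's distillation loss.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)

    # Per-example KL divergence between teacher and student distributions,
    # scaled by temperature^2 as in standard distillation.
    per_example_kd = F.kl_div(log_p_student, p_teacher,
                              reduction="none").sum(dim=1) * temperature ** 2

    avg_loss = per_example_kd.mean()

    # Worst-class loss: average the per-example losses within each class
    # present in the batch, then take the maximum over those classes.
    class_losses = [per_example_kd[labels == c].mean()
                    for c in labels.unique()]
    worst_class_loss = torch.stack(class_losses).max()

    # Any tradeoff between overall and worst-class objectives via alpha.
    return (1 - alpha) * avg_loss + alpha * worst_class_loss
```

Sweeping `alpha` from 0 to 1 would trace out a tradeoff curve between overall and worst-class performance, which is the kind of Pareto comparison the abstract refers to.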