Reducing the Capacity Gap via Spherical Knowledge Distillation

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Knowledge Distillation, Model Compression
TL;DR: This work proposes an efficient knowledge distillation method for training competitive students distilled from oversized teachers.
Abstract: Knowledge distillation aims to obtain a small yet effective student model by learning from the output of a large, knowledgeable teacher model. However, when the student is distilled from an oversized teacher, a critical performance degradation problem emerges. This paper revisits the performance degradation problem from the perspective of model confidence. Specifically, we apply energy-based metrics to measure model confidence and propose Spherical Knowledge Distillation (SKD), a more efficient knowledge distillation framework for distilling from larger teachers. A theoretical analysis shows that SKD effectively reduces the confidence gap between the teacher and the student, thus alleviating the performance degradation problem. We demonstrate that SKD is easy to train and significantly outperforms several strong baselines on mainstream datasets, including CIFAR-100 and ImageNet.
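The abstract describes SKD only at a high level (an energy-based confidence measure, plus a "spherical" distillation loss that narrows the teacher-student confidence gap), so the sketch below is an assumption-laden illustration of that general idea rather than the paper's exact formulation. The helper names `energy_confidence` and `spherical_kd_loss`, the choice to rescale both logit vectors to a shared radius, and the default temperatures are all hypothetical.

```python
import torch
import torch.nn.functional as F


def energy_confidence(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    """Energy score E(x) = -T * logsumexp(logits / T); lower energy
    typically indicates higher model confidence."""
    return -T * torch.logsumexp(logits / T, dim=-1)


def spherical_kd_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      T: float = 4.0) -> torch.Tensor:
    """Hypothetical sketch of a 'spherical' distillation loss: rescale both
    logit vectors to a shared radius (here, the batch-mean teacher logit
    norm) so the teacher's larger logit magnitude cannot dominate the soft
    targets, then apply the usual temperature-scaled KL distillation loss."""
    radius = teacher_logits.norm(dim=-1, keepdim=True).mean().detach()
    s = student_logits / student_logits.norm(dim=-1, keepdim=True) * radius
    t = teacher_logits / teacher_logits.norm(dim=-1, keepdim=True) * radius
    return F.kl_div(
        F.log_softmax(s / T, dim=-1),
        F.softmax(t / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
```

Under this reading, matching magnitude-normalized logits is one way to keep an oversized teacher's confidence from overwhelming the student; the actual SKD construction may differ in its normalization and scaling details.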
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning