Model Distillation with Knowledge Transfer from Face Classification to Alignment and Verification

15 Feb 2018 (modified: 10 Feb 2022) · ICLR 2018 Conference Blind Submission
Abstract: Knowledge distillation is a promising approach to model compression: a small student network is trained to imitate the targets of a large teacher network, so that the student becomes competitive with the teacher. Most previous studies focus on model distillation for the classification task, proposing different architectures and initializations for the student network. However, classification alone is not enough, and related tasks such as regression and retrieval are rarely considered. To address this, we take face recognition as a starting point and propose model distillation with knowledge transfer from face classification to alignment and verification. By selecting appropriate initializations and targets for the knowledge transfer, distillation becomes easier in non-classification tasks. Experiments on the CelebA and CASIA-WebFace datasets demonstrate that the student network can be competitive with the teacher in alignment and verification, and even surpasses the teacher under specific compression rates. In addition, to achieve stronger knowledge transfer, we use a common initialization trick to improve the distillation performance of classification. Evaluations on the CASIA-WebFace and large-scale MS-Celeb-1M datasets show the effectiveness of this simple trick.
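
For readers unfamiliar with the basic setup, the sketch below illustrates a generic knowledge-distillation loss of the kind the abstract alludes to: the student imitates the teacher's softened outputs while also fitting the ground-truth labels. This is a minimal, hedged example, not the paper's exact formulation; the temperature `T` and weight `alpha` are illustrative hyperparameters, and `distillation_loss` is a hypothetical helper name.

```python
# Minimal sketch of a generic distillation objective (assumed, not the paper's method):
# soft-target KL term on temperature-scaled logits plus a hard-label cross-entropy term.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-target term: KL divergence between softened teacher and student distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients are comparable across temperatures
    # Hard-target term: ordinary cross-entropy on the ground-truth identity labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In this generic recipe the classification head supplies the soft targets; the paper's contribution is to transfer such teacher knowledge beyond classification, into alignment (regression) and verification (retrieval) students.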
TL;DR: We take face recognition as a starting point and propose model distillation with knowledge transfer from face classification to alignment and verification.
Keywords: distill, transfer, classification, alignment, verification
Data: [CASIA-WebFace](https://paperswithcode.com/dataset/casia-webface), [CelebA](https://paperswithcode.com/dataset/celeba), [MS-Celeb-1M](https://paperswithcode.com/dataset/ms-celeb-1m)