AMD: Angular Margin based Knowledge Distillation

TMLR Paper24 Authors

03 Apr 2022 (modified: 28 Feb 2023) Rejected by TMLR
Abstract: Knowledge distillation, as a broad class of methods, has led to the development of lightweight and memory-efficient models by using a pre-trained model with a large capacity (teacher network) to train a smaller model (student network). Recently, additional variations of knowledge distillation that utilize activation maps of intermediate layers as the source of knowledge have been studied. In computer vision applications, it is generally observed that the feature activations learned by a higher-capacity model contain richer knowledge, highlighting complete objects while focusing less on the background. Based on this observation, we leverage the teacher's ability to accurately distinguish between positive (relevant to the target object) and negative (irrelevant) areas. We propose a new type of distillation, called angular margin-based distillation (AMD). AMD uses the angular distance between positive and negative features by projecting them onto a hypersphere, motivated by the near-angular distribution of features observed in many feature extractors. We then create a more attentive feature from the knowledge encoded by this angular distance by introducing an angular margin to the positive feature. Transferring such knowledge from the teacher network enables the student model to harness the teacher's sharper discrimination of positive and negative features, thus distilling superior student models. The proposed method is evaluated for various student-teacher network pairs on three public datasets. Furthermore, we show that the proposed method is compatible with other learning techniques, such as using fine-grained features, augmentation, and other distillation methods.
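To make the idea in the abstract concrete, the sketch below shows one way an angular-margin distillation term could be formed: features are split into positive and negative parts, projected onto the unit hypersphere, and the teacher's positive-negative angle is enlarged by a margin before being matched by the student. This is a minimal illustration under assumptions; the mask-based split, the helper names, and the `margin` parameter are illustrative and not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def angular_margin_distillation_loss(student_feat, teacher_feat, attention_mask, margin=0.1):
    """Hedged sketch of an angular-margin-style distillation loss.

    student_feat, teacher_feat: (B, C, H, W) intermediate activation maps.
    attention_mask: (B, 1, H, W) values in [0, 1]; high values mark positive
    (object-relevant) locations, low values mark negative (background) ones.
    `attention_mask` and `margin` are assumed inputs for illustration.
    """
    def split_and_project(feat):
        # Separate positive and negative regions, then project each onto
        # the unit hypersphere via L2 normalization.
        pos = (feat * attention_mask).flatten(1)
        neg = (feat * (1.0 - attention_mask)).flatten(1)
        return F.normalize(pos, dim=1), F.normalize(neg, dim=1)

    s_pos, s_neg = split_and_project(student_feat)
    t_pos, t_neg = split_and_project(teacher_feat)

    # Angle between the teacher's positive and negative features.
    t_cos = (t_pos * t_neg).sum(dim=1).clamp(-1 + 1e-7, 1 - 1e-7)
    t_angle = torch.acos(t_cos)

    # Enlarge the separation by adding an angular margin on the positive side.
    t_target = torch.cos(t_angle + margin)

    # The student is trained to reproduce the margin-enlarged angular relation.
    s_cos = (s_pos * s_neg).sum(dim=1)
    return F.mse_loss(s_cos, t_target)
```

In practice such a term would be added to the usual task loss (and possibly a logit-distillation loss) with a weighting coefficient; the exact combination used in the paper is not reproduced here.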
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Based on the reviewers' comments, we reorganized and rewrote several parts of the paper. The paper includes additional results and a more concise explanation of the proposed method.
Assigned Action Editor: ~Jia-Bin_Huang1
Submission Number: 24