Relation-Aware Label Smoothing for Self-KD

Published: 01 Jan 2024, Last Modified: 04 Nov 2024 · PAKDD (2) 2024 · CC BY-SA 4.0
Abstract: Knowledge distillation (KD) is widely used to improve model performance by transferring knowledge from a larger teacher model to a smaller student model. However, KD requires a pre-trained teacher model, which makes training inefficient. Self-knowledge distillation, in which the student improves itself without a separate teacher, has therefore been proposed. Although self-knowledge distillation achieves remarkable performance improvements with fewer resources than conventional teacher-student KD approaches, existing self-KD methods still require additional time and memory for training. We propose Relation-Aware Label Smoothing for Self-Knowledge Distillation (RAS-KD), which regularizes the student model by exploiting the inter-class relationships between class representative vectors through a lightweight auxiliary classifier. Compared to existing self-KD methods that consider only instance-level knowledge, we show that the proposed global-level knowledge is sufficient to achieve competitive performance at an extremely low training cost. We also obtain a further performance improvement through instance-level supervision. We demonstrate that RAS-KD outperforms existing self-KD approaches on various tasks with negligible additional cost.
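The following is a minimal sketch of the general idea of relation-aware label smoothing, assuming the class representative vectors are available (e.g., classifier weight rows or class prototypes) and that the off-target probability mass is distributed according to their pairwise cosine similarities. The function names (`relation_aware_targets`, `ras_loss`) and the `smoothing` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def relation_aware_targets(labels: torch.Tensor,
                           class_vectors: torch.Tensor,
                           smoothing: float = 0.1) -> torch.Tensor:
    """Build soft targets whose off-target mass follows inter-class similarity.

    labels:        (batch,) integer class indices
    class_vectors: (num_classes, dim) class representative vectors
    smoothing:     total probability mass moved off the ground-truth class
    """
    num_classes = class_vectors.size(0)
    # Pairwise cosine similarity between class representative vectors.
    normed = F.normalize(class_vectors, dim=1)
    sim = normed @ normed.t()                      # (C, C)
    sim.fill_diagonal_(float('-inf'))              # exclude the true class itself
    relation = F.softmax(sim, dim=1)               # distribution over the other classes

    one_hot = F.one_hot(labels, num_classes).float()
    return (1.0 - smoothing) * one_hot + smoothing * relation[labels]

def ras_loss(logits: torch.Tensor, labels: torch.Tensor,
             class_vectors: torch.Tensor, smoothing: float = 0.1) -> torch.Tensor:
    """Cross-entropy against relation-aware soft targets."""
    targets = relation_aware_targets(labels, class_vectors, smoothing)
    log_probs = F.log_softmax(logits, dim=1)
    return -(targets * log_probs).sum(dim=1).mean()
```

In this sketch, classes that are more similar (by cosine similarity of their representative vectors) receive a larger share of the smoothing mass, in contrast to uniform label smoothing, which spreads it equally over all non-target classes.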