Abstract: In this paper, we propose a novel and efficient method for knowledge distillation that is structurally simple and adds negligible computational overhead. Our method consists of three modules. The first is a calibrated mask, which prevents the teacher model's incorrect representations from disturbing the student model's training; the second and third modules improve the student model's performance by exploiting sample-level and process-level similarity, respectively. By combining these three modules, the student model achieves better performance in both qualitative and quantitative evaluation. We validate our method on standard benchmarks, including CIFAR-100 and TinyImageNet. The experimental results show that our method outperforms existing state-of-the-art approaches on both subjective and objective measures.
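To make the calibrated-mask idea concrete, the sketch below shows one plausible reading of it: the distillation loss is zeroed on samples the teacher misclassifies, so incorrect teacher outputs do not disturb the student. This is only an illustrative sketch, not the paper's actual formulation; the function name, the masking rule, and the temperature `T` are all assumptions.

```python
import torch
import torch.nn.functional as F

def masked_distillation_loss(student_logits, teacher_logits, labels, T=4.0):
    """Illustrative sketch of a calibrated-mask distillation loss.

    Assumption: the mask simply drops samples the teacher misclassifies;
    the paper's actual mask construction may differ.
    """
    # Mask out samples where the teacher's top-1 prediction is wrong,
    # so its incorrect representations do not disturb the student.
    mask = (teacher_logits.argmax(dim=1) == labels).float()          # (N,)

    # Standard temperature-scaled KL divergence, computed per sample.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    kd_per_sample = F.kl_div(log_p_student, p_teacher,
                             reduction="none").sum(dim=1) * (T * T)  # (N,)

    # Average only over samples the teacher classified correctly.
    return (kd_per_sample * mask).sum() / mask.sum().clamp(min=1.0)
```

In practice such a term would be added to the usual cross-entropy loss on the ground-truth labels; because the mask is computed from quantities already available in standard distillation, it adds negligible overhead, consistent with the efficiency claim above.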