Highlights
• This paper proposes a novel adversarial self-knowledge distillation method, named AI-KD.
• Through adversarial learning, the student model aligns its logits with those of the pre-trained model.
• For training stability, AI-KD distills both deterministic and progressive knowledge.
• AI-KD achieves outstanding performance on various fine-grained image datasets.
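To make the adversarial alignment concrete, below is a minimal, hypothetical sketch of the idea the highlights describe: a discriminator is trained to distinguish the pre-trained model's logits from the student's, while the student is updated to fool it alongside standard classification and distillation losses. The toy models, loss weights, and discriminator architecture are illustrative assumptions, not the paper's configuration, and the progressive (previous-student) distillation branch is omitted for brevity.

```python
# Hypothetical sketch of adversarial self-knowledge distillation; sizes and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 10

# Pre-trained model (frozen) and the student being trained; toy MLPs for illustration.
pretrained = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, num_classes))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, num_classes))
for p in pretrained.parameters():
    p.requires_grad_(False)

# Discriminator that tries to tell pre-trained logits from student logits.
discriminator = nn.Sequential(nn.Linear(num_classes, 32), nn.ReLU(), nn.Linear(32, 1))

opt_s = torch.optim.SGD(student.parameters(), lr=0.01)
opt_d = torch.optim.SGD(discriminator.parameters(), lr=0.01)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(8, 32)                     # toy batch of inputs
y = torch.randint(0, num_classes, (8,))    # toy labels

with torch.no_grad():
    t_logits = pretrained(x)               # deterministic knowledge from the pre-trained model

# 1) Update the discriminator: "real" = pre-trained logits, "fake" = student logits.
s_logits = student(x)
d_loss = bce(discriminator(t_logits), torch.ones(8, 1)) + \
         bce(discriminator(s_logits.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 2) Update the student: classification loss + distillation loss + adversarial loss
#    that pushes its logit distribution toward the pre-trained model's.
s_logits = student(x)
ce_loss = F.cross_entropy(s_logits, y)
kd_loss = F.kl_div(F.log_softmax(s_logits, dim=1),
                   F.softmax(t_logits, dim=1), reduction="batchmean")
adv_loss = bce(discriminator(s_logits), torch.ones(8, 1))  # student tries to fool the discriminator
s_loss = ce_loss + kd_loss + adv_loss                      # illustrative equal weighting
opt_s.zero_grad(); s_loss.backward(); opt_s.step()
```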