Progressive Knowledge Distillation For Generative Modeling

ICLR 2020 Conference Withdrawn Submission, 25 Sept 2019 (modified: 05 May 2023)
TL;DR: This paper introduces progressive knowledge distillation for learning generative models oriented toward recognition tasks.
Abstract: While modern generative models are able to synthesize high-fidelity, visually appealing images, successfully generating examples that are useful for recognition tasks remains an elusive goal. To this end, our key insight is that examples should be synthesized to recover the classifier decision boundaries that would be learned from a large number of real examples. More concretely, we treat a classifier trained on synthetic examples as the "student" and a classifier trained on real examples as the "teacher". By introducing knowledge distillation into a meta-learning framework, we encourage the generative model to produce examples in a way that enables the student classifier to mimic the behavior of the teacher. To mitigate the potential gap between student and teacher classifiers, we further propose to distill the knowledge in a progressive manner, either by gradually strengthening the teacher or weakening the student. We demonstrate the use of our model-agnostic distillation approach to deal with data scarcity, significantly improving few-shot learning performance on the miniImageNet and ImageNet1K benchmarks.
Keywords: knowledge distillation, generative modeling, deep learning
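
For context, a minimal sketch of the standard knowledge-distillation objective that the abstract builds on; the paper's exact progressive schedule and meta-learning formulation are not reproduced here, and the symbols $G$ (generative model), $f_T$ (teacher trained on real data), $f_S$ (student trained on synthetic data), softmax $\sigma$, and temperature $T$ are notation assumed for illustration:

$$\mathcal{L}_{\mathrm{KD}} \;=\; \mathbb{E}_{x \sim G}\!\left[\, T^{2}\,\mathrm{KL}\!\big(\sigma(f_T(x)/T)\,\big\|\,\sigma(f_S(x)/T)\big)\,\right]$$

Minimizing this with respect to both the student and the generator favors synthetic examples on which the student's predictions match the teacher's, which is the sense in which the generator is trained to recover the teacher's decision boundaries.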