Progressive Knowledge Distillation For Generative Modeling

Sep 25, 2019 · ICLR 2020 Conference Withdrawn Submission
  • TL;DR: This paper introduces progressive knowledge distillation for learning generative models that are oriented toward recognition tasks
  • Abstract: While modern generative models are able to synthesize high-fidelity, visually appealing images, successfully generating examples that are useful for recognition tasks remains an elusive goal. To this end, our key insight is that examples should be synthesized so as to recover the classifier decision boundaries that would be learned from a large number of real examples. More concretely, we treat a classifier trained on synthetic examples as the "student" and a classifier trained on real examples as the "teacher". By introducing knowledge distillation into a meta-learning framework, we encourage the generative model to produce examples in a way that enables the student classifier to mimic the behavior of the teacher. To mitigate the potential gap between the student and teacher classifiers, we further propose to distill the knowledge in a progressive manner, either by gradually strengthening the teacher or weakening the student. We demonstrate the use of our model-agnostic distillation approach to deal with data scarcity, significantly improving few-shot learning performance on the miniImageNet and ImageNet1K benchmarks.
  • Keywords: knowledge distillation, generative modeling, deep learning
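The abstract's core mechanism is knowledge distillation: a student classifier is trained to match a teacher's output distribution. The paper's exact objective is not given here, so the following is only a minimal sketch of a standard temperature-softened distillation loss (KL divergence between teacher and student predictions, as in Hinton et al.'s formulation); the function names and the temperature value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    A generic distillation objective (illustrative sketch): the student
    is penalized for diverging from the teacher's soft predictions.
    """
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's soft predictions
    eps = 1e-12  # avoid log(0)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))
```

In the paper's setting, the teacher is trained on real data, the student on synthetic data, and the generative model is updated (via the meta-learning loop) so that this mismatch shrinks; the "progressive" variants would schedule the teacher's strength or the student's capacity over training.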