Exploiting Knowledge Distillation for Few-Shot Image Generation

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: few-shot image generation, knowledge distillation
Abstract: Few-shot image generation, which trains generative models on limited examples, is of practical importance. The existing pipeline first pretrains a source model (containing a generator and a discriminator) on a large-scale dataset and then finetunes it on a target domain with limited samples. The main challenge is that the few-shot model easily overfits, which can be attributed to two aspects: the lack of sample diversity for the generator and the failure of fidelity discrimination for the discriminator. In this paper, we treat the diversity and fidelity in the source model as a kind of knowledge and propose to improve the generation results by exploring knowledge distillation. The source model trained on the large-scale dataset is regarded as the teacher, and the target model (student) is learned by introducing a momentum relation distillation module to produce diverse samples and a source discrimination distillation module to ensure fidelity discrimination. With these two modules, the proposed method outperforms the state of the art by a large margin, i.e., 10% for FFHQ to Sketches, while achieving better diversity.
One-sentence Summary: We exploit knowledge distillation in few-shot image generation to improve both diversity and fidelity.
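
Illustrative sketch (not from the paper): the abstract does not spell out the distillation losses, but a relation-based distillation term of the kind described can be approximated by matching intra-batch pairwise similarities between frozen teacher (source) features and student (target) features. The PyTorch snippet below is a minimal, assumption-laden sketch; the names `relation_matrix` and `relation_distillation_loss` are hypothetical and do not correspond to the authors' implementation.

```python
import torch
import torch.nn.functional as F


def relation_matrix(features: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine-similarity matrix over a batch of feature vectors."""
    f = F.normalize(features.flatten(1), dim=1)  # (B, D) unit vectors
    return f @ f.t()                             # (B, B) similarity matrix


def relation_distillation_loss(student_feats: torch.Tensor,
                               teacher_feats: torch.Tensor) -> torch.Tensor:
    """Match the student's intra-batch relations to the (frozen) teacher's.

    Preserving the teacher's pairwise sample relations is one common way to
    transfer sample diversity from a source generator to a few-shot target
    generator; this is only a stand-in for the paper's momentum relation
    distillation module.
    """
    s_rel = relation_matrix(student_feats)
    t_rel = relation_matrix(teacher_feats).detach()  # no gradient to teacher
    return F.mse_loss(s_rel, t_rel)
```

In practice one would sample a shared batch of latent codes, pass it through both the pretrained (teacher) and finetuned (student) generators, extract intermediate features, and add a term of this form to the adversarial loss during finetuning.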