Mining GANs for knowledge transfer to small domains

25 Sept 2019 (modified: 05 May 2023) | ICLR 2020 Conference Withdrawn Submission
Abstract: One of the attractive characteristics of deep neural networks is their ability to transfer knowledge obtained in one domain to other related domains. As a result, high-quality networks can be trained in domains with relatively little training data. This property has been extensively studied for discriminative networks but has received significantly less attention for generative models. Given the often enormous effort required to train GANs, both computationally and in collecting datasets, the reuse of pretrained GANs is a desirable objective. We therefore investigate various scenarios of knowledge transfer for generative models and propose methods to mine the knowledge most beneficial to a specific target domain from one or multiple pretrained GANs. This is done with a miner network that identifies the part of the pretrained GAN's generative distribution that outputs samples closest to the target domain. In the multiple-GAN case, we also train a selector that learns a prior over the available pretrained GANs. We show that both the selector and the miner can be trained by applying a selective backpropagation procedure on the critic output. We perform experiments on several complex datasets using various GAN architectures (BigGAN, Progressive GAN) and show that the proposed method, called MineGAN, effectively transfers knowledge to small domains, outperforming existing methods. In addition, MineGAN can successfully transfer knowledge from multiple pretrained GANs.
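
The mining idea described in the abstract can be illustrated with a short sketch: a small miner network is prepended to a frozen pretrained generator and trained adversarially against a critic on the target-domain data, so that gradients from the critic output reach only the miner. The code below is a minimal PyTorch-style sketch under these assumptions; the module names (Miner, pretrained_G, critic), the WGAN-style losses, and all hyperparameters are illustrative and are not the authors' reference implementation, and the multi-GAN selector is omitted.

```python
# Minimal sketch of mining a pretrained GAN for a small target domain.
# Assumes a PyTorch setup; names and losses are illustrative only.
import torch
import torch.nn as nn

class Miner(nn.Module):
    """Maps input noise to the region of the pretrained generator's latent
    space whose samples lie closest to the target domain."""
    def __init__(self, z_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, z_dim),
        )

    def forward(self, z):
        return self.net(z)

def train_step(miner, pretrained_G, critic, opt_m, opt_c, real_batch, z_dim=128):
    """One adversarial update. pretrained_G is assumed frozen
    (requires_grad_(False)), so the critic's gradient flows back through
    the generator into the miner only. WGAN regularization (e.g. a
    gradient penalty) is omitted for brevity."""
    device = real_batch.device
    z = torch.randn(real_batch.size(0), z_dim, device=device)

    # Critic update: score target-domain data against samples G(M(z)).
    opt_c.zero_grad()
    with torch.no_grad():
        fake = pretrained_G(miner(z))
    loss_c = -(critic(real_batch).mean() - critic(fake).mean())
    loss_c.backward()
    opt_c.step()

    # Miner update: only the miner's parameters receive gradients.
    opt_m.zero_grad()
    fake = pretrained_G(miner(z))
    loss_m = -critic(fake).mean()
    loss_m.backward()
    opt_m.step()
    return loss_c.item(), loss_m.item()
```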
Keywords: Generative adversarial networks, transfer learning, small domains, deep learning