Keywords: generative domain adaptation
Abstract: This work aims at transferring a Generative Adversarial Network (GAN) pre-trained on one image domain to a new domain $\textit{using as few as just one target image}$. The main challenge is that, under such limited supervision, it is extremely difficult to synthesize photo-realistic and highly diverse images while capturing representative characteristics of the target. Different from existing approaches that adopt the vanilla fine-tuning strategy, we attach two lightweight modules to the generator and the discriminator, respectively. Concretely, we introduce an $\textit{attribute adaptor}$ into the generator yet freeze its original parameters, through which the generator can reuse the prior knowledge to the greatest extent and hence maintain the synthesis quality and diversity. We then equip the well-learned discriminator backbone with an $\textit{attribute classifier}$ to ensure that the generator captures the appropriate characteristics from the reference. Furthermore, considering the poor diversity of the training data ($\textit{i.e.}$, as few as only one image), we propose to also constrain the diversity of the generative domain during training, alleviating the optimization difficulty. Our approach brings appealing results under various settings, $\textit{substantially}$ surpassing state-of-the-art alternatives, especially in terms of synthesis diversity. Notably, our method works well even with large domain gaps, and robustly converges $\textit{within a few minutes}$ for each experiment.
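To make the training scheme described above concrete, below is a minimal PyTorch sketch of the overall idea: a frozen pre-trained generator and discriminator backbone, a lightweight attribute adaptor on the latent code, and an attribute classifier head alongside the usual realism head. All class and variable names (`FrozenGenerator`, `AttributeAdaptor`, `attr_head`, etc.) are illustrative stand-ins rather than the authors' released code, the toy networks replace a real StyleGAN-like model, and the latent truncation used to constrain the generative diversity is an assumption about how such a constraint could be realized.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for a pre-trained StyleGAN-like generator and the
# discriminator's feature backbone; in practice these are loaded from
# a checkpoint and kept frozen throughout adaptation.
class FrozenGenerator(nn.Module):
    def __init__(self, w_dim=512, img_ch=3):
        super().__init__()
        self.w_dim, self.img_ch = w_dim, img_ch
        self.synthesis = nn.Sequential(nn.Linear(w_dim, 64 * 64 * img_ch), nn.Tanh())

    def forward(self, w):
        return self.synthesis(w).view(-1, self.img_ch, 64, 64)

class FrozenBackbone(nn.Module):
    def __init__(self, img_ch=3, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * img_ch, feat_dim))

    def forward(self, x):
        return self.net(x)

class AttributeAdaptor(nn.Module):
    """Lightweight learnable re-modulation of the latent code; since the
    generator itself stays frozen, its prior (and hence synthesis quality
    and diversity) is preserved."""
    def __init__(self, w_dim):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(w_dim))
        self.shift = nn.Parameter(torch.zeros(w_dim))

    def forward(self, w):
        return w * self.scale + self.shift

G, D_backbone = FrozenGenerator(), FrozenBackbone()
for p in G.parameters():
    p.requires_grad_(False)          # generator parameters stay frozen
for p in D_backbone.parameters():
    p.requires_grad_(False)          # well-learned features stay frozen

adaptor = AttributeAdaptor(G.w_dim)  # only trainable part on the generator side
realism_head = nn.Linear(256, 1)     # standard real/fake head
attr_head = nn.Linear(256, 1)        # attribute classifier on frozen features

opt_g = torch.optim.Adam(adaptor.parameters(), lr=2e-3)
opt_d = torch.optim.Adam(
    list(realism_head.parameters()) + list(attr_head.parameters()), lr=2e-3)

target = torch.rand(1, 3, 64, 64)    # the single reference image

for step in range(3):                # a few steps just for illustration
    # Hypothetical diversity constraint: truncate the latent distribution.
    w = 0.7 * torch.randn(8, G.w_dim)
    fake = G(adaptor(w))

    # Discriminator side: realism plus attribute classification.
    f_real, f_fake = D_backbone(target), D_backbone(fake.detach())
    loss_d = (F.softplus(realism_head(f_fake)).mean()
              + F.softplus(-realism_head(f_real)).mean()
              + F.binary_cross_entropy_with_logits(
                    attr_head(f_real), torch.ones(1, 1))
              + F.binary_cross_entropy_with_logits(
                    attr_head(f_fake), torch.zeros(8, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator side: fool both heads, so the adaptor steers synthesis
    # toward the target's attributes without touching the frozen weights.
    f_fake = D_backbone(fake)
    loss_g = (F.softplus(-realism_head(f_fake)).mean()
              + F.binary_cross_entropy_with_logits(
                    attr_head(f_fake), torch.ones(8, 1)))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Because only the adaptor and the two small heads receive gradients, the number of trainable parameters is tiny compared with full fine-tuning, which is consistent with the abstract's claim of robust convergence within a few minutes.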
One-sentence Summary: This work aims at transferring a Generative Adversarial Network (GAN) pre-trained on one image domain to a new domain using as few as just one target image.
Community Implementations: [5 code implementations](https://www.catalyzex.com/paper/arxiv:2111.09876/code)