Towards Diverse and Faithful One-shot Adaption of Generative Adversarial Networks

Published: 31 Oct 2022, Last Modified: 21 Oct 2022 · NeurIPS 2022 Accept
Keywords: StyleGAN, Domain Adaption, One-shot, CLIP
TL;DR: We present DiFa, a method that addresses the diverse-generation and faithful-adaptation issues in one-shot generative domain adaption.
Abstract: One-shot generative domain adaption aims to transfer a generator pre-trained on one domain to a new domain using only a single reference image. However, it remains very challenging for the adapted generator (i) to generate diverse images inherited from the pre-trained generator while (ii) faithfully acquiring the domain-specific attributes and styles of the reference image. In this paper, we present a novel one-shot generative domain adaption method, i.e., DiFa, for diverse generation and faithful adaptation. For global-level adaptation, we leverage the difference between the CLIP embedding of the reference image and the mean CLIP embedding of source images to constrain the target generator. For local-level adaptation, we introduce an attentive style loss that aligns each intermediate token of an adapted image with the corresponding token of the reference image. To facilitate diverse generation, selective cross-domain consistency is introduced to select and retain domain-sharing attributes in the editing latent $\mathcal{W}+$ space, thereby inheriting the diversity of the pre-trained generator. Extensive experiments show that our method outperforms state-of-the-art methods both quantitatively and qualitatively, especially when the domain gap is large. Moreover, DiFa can easily be extended to zero-shot generative domain adaption with appealing results.
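The global-level adaptation described in the abstract relies on a direction in CLIP embedding space, from the mean source embedding to the reference embedding, that the adapted generator should follow. A minimal sketch of such a directional loss is given below; the function name, arguments, and cosine-alignment formulation are illustrative assumptions (the paper's exact loss may differ), and embeddings are modeled as plain NumPy vectors rather than outputs of an actual CLIP encoder.

```python
import numpy as np

def directional_clip_loss(adapted_emb, source_emb, ref_emb, mean_source_emb):
    """Illustrative sketch (not the paper's exact loss): encourage the
    per-sample shift in CLIP space (adapted image minus its source
    counterpart) to align with the domain shift (reference image minus
    the mean of source-domain embeddings)."""
    d_sample = adapted_emb - source_emb        # how this sample moved
    d_domain = ref_emb - mean_source_emb       # desired domain direction
    cos = np.dot(d_sample, d_domain) / (
        np.linalg.norm(d_sample) * np.linalg.norm(d_domain) + 1e-8
    )
    return 1.0 - cos  # 0 when perfectly aligned, 2 when opposite
```

Under this formulation, the loss vanishes when the adapted image moves in exactly the reference-minus-mean direction, regardless of step size, which matches the abstract's idea of constraining the target generator with the embedding difference.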
Supplementary Material: pdf