ADIFT: Zero-Shot Generative Model Adaption Via Adaptive Domain-Invariant Feature Transfer

Published: 01 Jan 2024, Last Modified: 01 Oct 2024, ICASSP 2024, CC BY-SA 4.0
Abstract: CLIP-guided zero-shot image generative model adaptation methods require only textual domain labels and no target domain images, but several problems remain unsolved, such as identity degradation and pattern overfitting. To address these issues, an adaptive domain-invariant feature transfer (ADIFT) method is proposed. It makes the target domain generator learn domain-invariant features from the source domain generator while learning domain-variant features from the CLIP space. We first introduce a local self-similarity map to represent and preserve image identity features, and then add a point-wise gate module with learnable parameters on the alignment path of the local self-similarity maps to transfer cross-domain features adaptively. Qualitative and quantitative experimental results validate that the proposed ADIFT effectively alleviates identity degradation and pattern overfitting.
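The abstract only names the two core components; below is a minimal PyTorch sketch of how a local self-similarity map and a learnable point-wise gate on the alignment path could look. The function names, patch size, 1x1-conv gating form, and L1 alignment loss are illustrative assumptions for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def local_self_similarity(feat, patch_size=3):
    """Cosine similarity of each spatial location to its local neighborhood.

    feat: (B, C, H, W) intermediate generator features.
    Returns (B, patch_size**2, H, W) local self-similarity maps.
    """
    B, C, H, W = feat.shape
    feat = F.normalize(feat, dim=1)
    # Gather a patch_size x patch_size neighborhood around every location.
    pad = patch_size // 2
    neighbors = F.unfold(feat, kernel_size=patch_size, padding=pad)   # (B, C*k*k, H*W)
    neighbors = neighbors.view(B, C, patch_size ** 2, H * W)
    center = feat.view(B, C, 1, H * W)
    sim = (neighbors * center).sum(dim=1)                             # (B, k*k, H*W)
    return sim.view(B, patch_size ** 2, H, W)


class PointwiseGate(nn.Module):
    """Learnable per-location gate on the self-similarity alignment path
    (a hypothetical realization of the point-wise gate module)."""

    def __init__(self, channels):
        super().__init__()
        # 1x1 conv producing one scalar gate per spatial location.
        self.gate = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, sim_src, sim_tgt):
        g = torch.sigmoid(self.gate(sim_src))                         # (B, 1, H, W) in [0, 1]
        # Locations with high gate values are pushed to keep the source
        # generator's (identity-preserving) structure; low-gate locations
        # are left free to adapt toward the target domain.
        return (g * (sim_tgt - sim_src).abs()).mean()


# Illustrative usage: src_feat comes from the frozen source generator,
# tgt_feat from the generator being adapted (names are assumptions).
# gate = PointwiseGate(channels=9)
# loss_align = gate(local_self_similarity(src_feat),
#                   local_self_similarity(tgt_feat))
```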