Keywords: Generative Domain Adaptation; Image Generation; 3D Generation
Abstract: Generative domain adaptation has recently achieved remarkable progress, enabling a pre-trained generator to be adapted to a new target domain. However, existing methods are limited to a single target domain and a single modality, either text-driven or image-driven. In this paper, we explore a novel task -- $\textit{Generalized Hybrid Domain Adaptation}$. Compared with conventional generative domain adaptation, it offers greater flexibility by adapting the generator to a hybrid of multiple target domains specified by multi-modal references, including one-shot images and zero-shot text prompts. It is also more challenging: the composition of multi-modal target domains must be represented while the characteristics of the source domain are preserved. To address these issues, we propose UniHDA, a $\textbf{unified}$ and $\textbf{versatile}$ framework for generalized hybrid domain adaptation. Drawing inspiration from the interpolable latent space of StyleGAN, we find that linear interpolation between domain shifts in CLIP's embedding space likewise uncovers favorable compositional capabilities for adaptation. In light of this finding, we linearly interpolate the domain shifts of multiple target domains to achieve hybrid domain adaptation. To enhance $\textbf{consistency}$ with the source domain, we further propose a novel cross-domain spatial structure (CSS) loss that preserves detailed spatial structure between the source and target generators. Experiments show that the adapted generator synthesizes realistic images with various attribute compositions while maintaining robust consistency with the source domain. Moreover, UniHDA is generator-agnostic and applies to multiple generators, e.g., StyleGAN, EG3D, and video generators.
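To make the core idea concrete, below is a minimal sketch of the interpolation described in the abstract: each target domain induces a shift direction in CLIP's embedding space, and a weighted linear combination of these shifts represents the hybrid domain. It assumes the OpenAI CLIP package; the prompts, weights, and helper names are illustrative placeholders, not the paper's actual implementation (which also supports image-driven shifts via CLIP's image encoder).

```python
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def text_embedding(prompt: str) -> torch.Tensor:
    """Encode a text prompt into a unit-norm CLIP embedding."""
    tokens = clip.tokenize([prompt]).to(device)
    with torch.no_grad():
        emb = model.encode_text(tokens)
    return emb / emb.norm(dim=-1, keepdim=True)

# A domain shift is the difference between a target-domain embedding
# and the source-domain embedding (prompts here are hypothetical).
source = text_embedding("photo of a face")
targets = ["painting of a face", "sketch of a face"]
shifts = [text_embedding(t) - source for t in targets]

# Linearly interpolate the per-domain shifts (weights summing to 1)
# to obtain a single direction representing the hybrid target domain.
weights = [0.5, 0.5]
hybrid_shift = sum(w * s for w, s in zip(weights, shifts))
```

In this sketch the resulting `hybrid_shift` would serve as the adaptation direction guiding the generator fine-tuning; the choice of weights controls how strongly each target domain contributes to the hybrid.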
Supplementary Material: zip
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2824