DomainFusion: Generalizing To Unseen Domains with Latent Diffusion Models

21 Sept 2023 (modified: 25 Mar 2024) — ICLR 2024 Conference Withdrawn Submission
Keywords: Domain Generalization, Diffusion Model
Abstract: Latent diffusion models (LDMs) have achieved success in various tasks beyond image generation, owing to their large-scale image-text training datasets and high-quality generation capability. However, their application to image classification remains unclear. Existing approaches that directly transform an LDM into a discriminative model rely on mismatched text-image pairs for which the LDM fails to produce accurate estimates, resulting in degraded performance; other methods that extract visual knowledge are designed only for generative tasks. Additionally, domain generalization (DG) still faces challenges due to the scarcity of labeled cross-domain data: existing data-generation approaches suffer from limited performance, and how to migrate LDMs to DG remains unknown. We address these two issues with DomainFusion, a framework that leverages an LDM at both the latent level and the pixel level for DG classification. At the latent level, we propose Gradient Score Distillation (GSD), which distills gradient priors from the LDM to guide the optimization of the DG model; we further prove theoretically that it optimizes the KL divergence between the predicted distributions of the LDM and the DG model. At the pixel level, we propose an autoregressive generation method to shuffle synthetic samples and a sampling strategy to optimize the semantic and non-semantic factors of synthetic samples. Experimental results demonstrate that DomainFusion surpasses data-generation methods by a large margin and achieves state-of-the-art performance on multiple benchmark datasets.
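The abstract does not specify the exact form of the GSD gradient, but score-distillation methods in the literature (e.g. SDS) typically skip the denoiser's Jacobian and use a weighted residual between the model's noise prediction and the injected noise. The sketch below illustrates that generic pattern in NumPy; the function name, the toy "denoiser", and the weighting are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def score_distillation_gradient(noise, eps_pred, weight):
    """Generic score-distillation-style gradient: w(t) * (eps_hat - eps).

    noise:    the Gaussian noise added to the latent at timestep t
    eps_pred: the diffusion model's noise prediction for the noised latent
    weight:   a timestep-dependent weighting w(t)

    As in SDS-style losses, the denoiser's Jacobian is omitted, so the
    gradient is just a weighted prediction residual.
    """
    return weight * (eps_pred - noise)

rng = np.random.default_rng(0)
latent = rng.normal(size=(4,))   # latent being optimized by the DG model
noise = rng.normal(size=(4,))    # noise injected at a sampled timestep
# Toy stand-in "denoiser": predicts the noise plus a small pull from the latent.
eps_pred = noise + 0.1 * latent

grad = score_distillation_gradient(noise, eps_pred, weight=0.5)
# One gradient-descent step nudges the latent toward the diffusion prior.
latent_new = latent - 0.01 * grad
```

Under this toy denoiser the gradient reduces to `0.5 * 0.1 * latent`, so the update simply shrinks the latent toward the prior; a real LDM would supply `eps_pred` from its U-Net instead.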
Supplementary Material: pdf
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3632