Language-Guided Diffusion for Domain Generalization

Published: 06 Mar 2025, Last Modified: 01 May 2025
Venue: SCSL @ ICLR 2025
License: CC BY 4.0
Track: regular paper (up to 6 pages)
Keywords: Domain Generalization, Diffusion Models, LLM
Abstract: Domain generalization (DG) addresses the challenge of training machine learning models that generalize effectively to unseen target domains under distribution shift. Traditional data augmentation techniques, while useful, often fail to simulate the novel domain characteristics needed for robust DG. We introduce a data augmentation framework that combines Large Language Models (LLMs) with diffusion models to generate diverse and realistic training data for DG. Our method employs LLMs to compose prompts that describe new domain styles, which diffusion models then use to synthesize high-fidelity images representative of these unseen domains. Furthermore, we integrate a CLIP-guided diversity analysis to ensure that the generated data enhances model generalization while maintaining computational efficiency. Experiments on the PACS dataset show that our method significantly outperforms traditional augmentation techniques.
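The abstract describes a three-stage pipeline: LLM-generated style prompts, diffusion-based image synthesis, and a CLIP-guided diversity check. Below is a minimal sketch of how such a pipeline might look, assuming Stable Diffusion via Hugging Face diffusers and CLIP via transformers; the hard-coded style prompts, the cosine-similarity threshold, and the helper names (clip_embed, generate_diverse) are illustrative assumptions, not the authors' implementation.

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Diffusion model that renders the LLM-proposed domain styles into images.
sd = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)

# CLIP encoder used for the diversity analysis over generated images.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical LLM-produced style prompts for one PACS class ("dog");
# in the paper these would come from querying an LLM for novel domain styles.
style_prompts = [
    "a dog in the style of a charcoal sketch",
    "a dog as a stained-glass window",
    "a dog rendered as low-poly 3D art",
]

@torch.no_grad()
def clip_embed(images):
    """Return L2-normalized CLIP image embeddings."""
    inputs = clip_proc(images=images, return_tensors="pt").to(device)
    feats = clip.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def generate_diverse(prompts, existing_embeds, sim_threshold=0.9):
    """Generate one image per prompt; keep it only if its CLIP embedding
    is not too similar to any already-accepted image (diversity filter)."""
    kept = []
    for p in prompts:
        img = sd(p, num_inference_steps=30).images[0]
        emb = clip_embed([img])  # shape (1, d)
        if existing_embeds.numel() > 0:
            sims = (emb @ existing_embeds.T).squeeze(0)
            if sims.max().item() > sim_threshold:
                continue  # too close to an existing sample; skip it
        existing_embeds = torch.cat([existing_embeds, emb], dim=0)
        kept.append((p, img))
    return kept, existing_embeds

embeds = torch.empty(0, clip.config.projection_dim, device=device)
augmented, embeds = generate_diverse(style_prompts, embeds)
```

Rejecting images whose pairwise CLIP cosine similarity exceeds a threshold is one simple way to operationalize a "diversity analysis"; the paper's actual selection criterion may differ.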
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 34