Generative and Explainable Data Augmentation for Single-Domain Generalization

21 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Domain generalization, data augmentation, contrastive learning, generative model, model interpretation
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: In this work, we propose Generative and Explainable Adversarial Data Augmentation (GEADA), a novel framework designed to tackle the single-domain generalization challenge in image classification. The framework consists of two competing components: an augmentor that synthesizes diverse yet semantically consistent augmentations, and a projector that learns domain-invariant representations from the augmented samples. The augmentor leverages a generative network for style transformations and an attribution-based cropping module for explainable geometric augmentations. We further incorporate theoretically grounded contrastive loss functions, inspired by the geometric properties of unit hyperspheres, to promote the diversity of generated augmentations and the robustness of the learned representations. Extensive experiments on multiple standard domain generalization benchmarks demonstrate the effectiveness of our approach against domain shifts.
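The abstract mentions contrastive losses motivated by the geometry of unit hyperspheres. The submission does not give the exact formulation, but a common instantiation of such losses decomposes into an alignment term (positive pairs stay close on the sphere) and a uniformity term (embeddings spread out over the sphere). The sketch below is an illustrative assumption, not the paper's actual loss; all function names and the temperature parameter `t` are hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit hypersphere.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def alignment_loss(z1, z2):
    # Mean squared distance between matched (original, augmented) pairs.
    # Small values mean augmentations preserve semantics in feature space.
    return np.mean(np.sum((z1 - z2) ** 2, axis=1))

def uniformity_loss(z, t=2.0):
    # Log of the mean pairwise Gaussian potential over distinct pairs.
    # More negative values mean embeddings are spread more uniformly,
    # which encourages diversity of the augmented views.
    sq_dists = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(z.shape[0], k=1)  # distinct pairs only
    return np.log(np.mean(np.exp(-t * sq_dists[iu])))

# Toy usage: embeddings of originals and their augmented views.
rng = np.random.default_rng(0)
z_orig = l2_normalize(rng.normal(size=(8, 16)))
z_aug = l2_normalize(z_orig + 0.1 * rng.normal(size=(8, 16)))

align = alignment_loss(z_orig, z_aug)   # near zero: views stay close
unif = uniformity_loss(z_orig)          # non-positive: features spread out
print(align, unif)
```

A total objective in this style would weight the two terms against each other, trading semantic consistency of augmentations against representational diversity, consistent with the augmentor/projector competition the abstract describes.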
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3087