Randomized Adversarial Style Perturbations for Domain Generalization

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Keywords: Domain Generalization, Data Augmentation, Adversarial Attacks
Abstract: While deep neural networks have shown remarkable progress in various computer vision tasks, they often suffer from weak generalization on unseen domains. To tackle performance degradation under such domain shifts, Domain Generalization (DG) aims to learn domain-invariant features applicable to unseen target domains using only data from source domains. This paper presents a simple yet effective approach to domain generalization via style perturbation using adversarial attacks. Motivated by the observation that the characteristics of each domain are captured by the feature statistics corresponding to style, we propose a novel domain generalization technique, referred to as Randomized Adversarial Style Perturbations (RASP). The proposed algorithm augments the styles of features to deceive the network into predicting randomly selected labels during training, which prevents the network from being misled by the unexpected styles observed in unseen target domains. While RASP is effective in handling domain shifts, naïvely integrating it into the training procedure might degrade the ability to learn from the source domains because it places no restriction on the perturbations of representations. This challenge is alleviated by Normalized Feature Mixup (NFM), which facilitates learning the original features while achieving robustness to perturbed representations by mixing the two during training. We evaluate the proposed algorithm via extensive experiments on various benchmarks and show that our approach improves domain generalization ability, especially on large-scale benchmarks.
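The abstract does not specify the exact update rule, so the following PyTorch sketch is only illustrative. It assumes the "style" of a feature map is its channel-wise mean and standard deviation (as in AdaIN-style augmentation) and uses a single signed-gradient step that pushes the prediction toward a randomly drawn label; all names and hyperparameters here (rasp_style_perturb, normalized_feature_mixup, eps, alpha) are hypothetical, not the paper's.

```python
import torch
import torch.nn.functional as F

def rasp_style_perturb(feat, head, rand_labels, eps=0.1):
    """One-step adversarial style perturbation (illustrative sketch).

    feat:        (B, C, H, W) intermediate feature map
    head:        remainder of the network, mapping features to logits
    rand_labels: randomly selected target labels, shape (B,)
    eps:         step size for perturbing the style statistics
    """
    feat = feat.detach()
    # Style = channel-wise mean and standard deviation of the feature map.
    mu = feat.mean(dim=(2, 3), keepdim=True)           # (B, C, 1, 1)
    sigma = feat.std(dim=(2, 3), keepdim=True) + 1e-6  # (B, C, 1, 1)
    content = (feat - mu) / sigma                      # style-normalized content

    # Perturb only the style statistics, not the content.
    d_mu = torch.zeros_like(mu, requires_grad=True)
    d_sigma = torch.zeros_like(sigma, requires_grad=True)
    logits = head(content * (sigma + d_sigma) + (mu + d_mu))

    # Descend on the loss w.r.t. the random targets, i.e. shift the
    # style so the network is deceived into predicting the random labels.
    loss = F.cross_entropy(logits, rand_labels)
    loss.backward()
    with torch.no_grad():
        mu_adv = mu - eps * d_mu.grad.sign()
        sigma_adv = sigma - eps * d_sigma.grad.sign()
    return content * sigma_adv + mu_adv

def normalized_feature_mixup(feat, feat_adv, alpha=0.5):
    """Illustrative NFM step: mix the original and style-perturbed
    features so the network keeps learning the clean representation
    while gaining robustness to the perturbed one."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * feat + (1.0 - lam) * feat_adv
```

In an actual training loop, the mixed features would be passed through the remaining layers and trained against the ground-truth labels; a multi-step (PGD-style) perturbation and the paper's precise normalization and mixing scheme would replace the single-step guesses above.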
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning