Randomized Adversarial Style Perturbations for Domain Generalization

Published: 01 Jan 2024 · Last Modified: 10 Oct 2024 · WACV 2024 · CC BY-SA 4.0
Abstract: We propose a novel domain generalization technique, referred to as Randomized Adversarial Style Perturbation (RASP), motivated by the observation that the characteristics of each domain are captured by the feature statistics corresponding to its style. The proposed algorithm perturbs the style of a feature in an adversarial direction toward a randomly selected class. By incorporating the perturbed styles into training, we prevent the model from being misled by the unexpected styles observed in unseen target domains. While RASP is effective for handling domain shifts, naïvely integrating it into the training procedure tends to degrade the model's ability to learn from the source domains, owing to the feature distortions caused by style perturbation. We alleviate this issue with Normalized Feature Mixup (NFM), which facilitates learning the original features while achieving robustness to perturbed representations. Extensive experiments on various benchmarks show that our approach improves domain generalization performance, especially on large-scale benchmarks.
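The core operation the abstract describes, treating per-channel feature mean and standard deviation as "style" and re-normalizing features under perturbed statistics, can be sketched as follows. This is a minimal NumPy sketch under our own assumptions: random noise stands in for RASP's adversarial gradient step toward a random class (which would require a full model and loss), and all function names here are hypothetical, not the authors' API.

```python
import numpy as np

def style_stats(feat, eps=1e-6):
    """Per-channel mean and std of a feature map shaped (C, H, W)."""
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True) + eps
    return mu, sigma

def perturb_style(feat, alpha=0.1, rng=None):
    """Strip the original style and re-apply perturbed statistics.

    NOTE: random noise is a stand-in; RASP perturbs the statistics in an
    adversarial direction toward a randomly selected class.
    """
    rng = np.random.default_rng() if rng is None else rng
    mu, sigma = style_stats(feat)
    mu_p = mu + alpha * rng.standard_normal(mu.shape)
    sigma_p = sigma * (1.0 + alpha * rng.standard_normal(sigma.shape))
    normalized = (feat - mu) / sigma   # remove original style
    return normalized * sigma_p + mu_p  # inject perturbed style

feat = np.random.default_rng(0).standard_normal((8, 4, 4))
out = perturb_style(feat, rng=np.random.default_rng(1))
print(out.shape)  # (8, 4, 4): content preserved, style statistics shifted
```

Training on such perturbed features alongside the originals is what the paper pairs with Normalized Feature Mixup to keep source-domain learning intact.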