Learning Intrinsic Invariance Within Intra-Class for Domain Generalization

Published: 01 Jan 2025, Last Modified: 04 Nov 2025 · IEEE Trans. Multim. 2025 · CC BY-SA 4.0
Abstract: Deep learning methods often struggle with the domain shift problem, leading to poor generalization on out-of-domain (OOD) data. To address this, domain generalization (DG) leverages multiple source domains to train a model that can generalize to OOD data. Existing domain generalization methods primarily focus on learning domain invariance, but they fail to ensure proximity among samples of the same category when domains are aligned for domain-invariant learning; consequently, their generalization performance remains suboptimal. In this paper, we propose a novel approach that addresses this issue by iteratively approximating the category-wise domain-invariant distribution from all domains. Our method is an iterative loop: we first estimate the domain-invariant distribution for each category by averaging its statistical characteristics across all domains, and then adopt adversarial perturbation alignment to keep each sample close to the domain-invariant distribution of its category. Through this iterative loop, the deep network is optimized for robust domain-invariance learning. Extensive experiments demonstrate that our proposed method consistently outperforms state-of-the-art approaches across various scenarios.
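The first step of the loop described above can be sketched as follows. This is a minimal, hypothetical reading of "averaging the statistical characteristics across all domains": compute each category's feature mean within every source domain, then average those per-domain means to obtain a domain-invariant category prototype. The function name and the choice of the mean as the statistic are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def category_invariant_prototypes(features, domains, labels):
    """Estimate a domain-invariant prototype per category.

    For each category, compute the class-wise feature mean inside
    each source domain, then average those means across domains so
    every domain contributes equally regardless of sample count.
    (Illustrative sketch; the paper may use richer statistics.)
    """
    prototypes = {}
    for c in np.unique(labels):
        domain_means = []
        for d in np.unique(domains):
            mask = (labels == c) & (domains == d)
            if mask.any():  # skip domains with no samples of class c
                domain_means.append(features[mask].mean(axis=0))
        # Average the per-domain class means to get the prototype.
        prototypes[int(c)] = np.mean(domain_means, axis=0)
    return prototypes
```

In the full method, samples would then be pulled toward their category's prototype via adversarial perturbation alignment, and the prototypes re-estimated as the network is updated, closing the iterative loop.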