Abstract: In realistic scenarios, the effectiveness of Deep Neural Networks is hindered by domain shift, where discrepancies between training (source) and testing (target) domains lead to poor generalization on previously unseen data. The Domain Generalization (DG) paradigm addresses this challenge by developing a general model that relies solely on source domains, aiming for robust performance in unknown domains. Although prior augmentation-based methods make progress by introducing more diversity based on the known distributions, DG still suffers from overfitting due to limited domain-specific information. Therefore, unlike prior DG methods that treat all parameters equally, we propose a Gradient-Aware Domain-Invariant Learning mechanism that adaptively recognizes and emphasizes domain-invariant parameters. Specifically, we introduce two novel modules, Domain Decoupling and Combination and Domain-Invariance-Guided Backpropagation (DIGB): the former generates contrastive samples that share the same domain-invariant features, and the latter selectively prioritizes parameters whose optimization directions are consistent across contrastive sample pairs, thereby enhancing domain robustness. Additionally, a sparse version of DIGB strikes a trade-off between performance and efficiency. Our extensive experiments on various domain generalization benchmarks demonstrate that our proposed method achieves state-of-the-art performance with strong generalization capabilities.
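To make the DIGB idea concrete, the sketch below illustrates one way a gradient-agreement update could look. It is a minimal illustration, not the authors' implementation: it assumes "unified optimization directions" can be approximated by comparing per-parameter gradient signs between a contrastive pair (x_a, x_b) that shares the same domain-invariant content, and the function name `digb_style_step` and the damping factor `damp` are hypothetical.

```python
# Illustrative sketch only (assumed, not the paper's released code): emphasize
# parameters whose gradients agree in sign across a contrastive sample pair.
import torch
import torch.nn as nn


def digb_style_step(model: nn.Module, loss_fn, optimizer,
                    x_a: torch.Tensor, x_b: torch.Tensor,
                    y: torch.Tensor, damp: float = 0.1) -> None:
    """One update that prioritizes parameters with consistent gradient
    directions across a pair sharing domain-invariant content."""
    # Gradients on the first view of the pair.
    optimizer.zero_grad()
    loss_fn(model(x_a), y).backward()
    grads_a = [p.grad.detach().clone() if p.grad is not None else None
               for p in model.parameters()]

    # Gradients on the second view of the pair.
    optimizer.zero_grad()
    loss_fn(model(x_b), y).backward()

    # Keep the averaged gradient where the two views agree in sign (treated
    # here as domain-invariant directions); damp it where they conflict.
    for p, g_a in zip(model.parameters(), grads_a):
        if p.grad is None or g_a is None:
            continue
        agree = (torch.sign(p.grad) == torch.sign(g_a)).float()
        combined = 0.5 * (p.grad + g_a)
        p.grad = combined * (agree + damp * (1.0 - agree))

    optimizer.step()
```

A sparse variant, as mentioned in the abstract, could restrict this agreement check to a subset of parameters to trade a little accuracy for lower overhead.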