Abstract: Deep neural networks often rely on spurious features irrelevant to the ground truth during training, which results in poor performance in unseen domains. Recent Domain Generalization (DG) approaches attempt to make networks robust by assuming specific biases (e.g., a bias towards texture). However, such strategies built on pre-defined biases struggle against the diverse spurious features arising from numerous factors such as background and texture. In this paper, we focus on the influence of spurious features on networks rather than assuming specific biases. We introduce a novel concept, Spuriosity Bias, representing the extent to which networks are biased towards spurious features in each domain. We then propose the Dynamic Spuriosity Bias Harmonizer (DSBH), which flexibly inhibits Spuriosity Bias to adjust network parameters. DSBH employs two networks, one focusing on network bias and the other learning domain-invariant features. The Spuriosity Bias is derived from the difference between the two networks' gradients and logits. Across multiple popular DG benchmarks, DSBH outperforms state-of-the-art methods and exhibits remarkably stable accuracy curves on unseen test domains. Our code is available at: https://github.com/ByeongtaePark/DSBH.
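The abstract describes deriving a per-domain bias signal from the disagreement between a bias-focused network and an invariant network. As a minimal, purely illustrative sketch (not the authors' actual method; see the repository above), one could proxy such a signal by the per-sample KL divergence between the two networks' softmax outputs and use it to reweight the loss. All function names and the `1/(1+bias)` weighting are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def spuriosity_bias_proxy(logits_bias, logits_inv):
    # hypothetical proxy: per-sample KL divergence between the
    # bias-focused network's and the invariant network's predictions
    p = softmax(logits_bias)
    q = softmax(logits_inv)
    return np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)

def reweighted_ce(logits_inv, labels, bias_score):
    # hypothetical harmonizing step: down-weight samples on which
    # the two networks disagree most
    q = softmax(logits_inv)
    ce = -np.log(q[np.arange(len(labels)), labels] + 1e-12)
    w = 1.0 / (1.0 + bias_score)
    return float(np.mean(w * ce))
```

The proxy is zero when the two networks agree exactly and grows with their disagreement, giving a simple scalar that a training loop could use to modulate per-sample gradients.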
External IDs: dblp:conf/pakdd/ParkLKC25