Identifying Spurious Correlations Early in Training through the Lens of Simplicity Bias

Authors: ICLR 2024 Workshop DMLR Submission 75 Authors

Published: 04 Mar 2024, Last Modified: 02 May 2024
Venue: DMLR @ ICLR 2024
License: CC BY 4.0
Keywords: spurious correlations, simplicity bias
Abstract: Neural networks trained with (stochastic) gradient descent have an inductive bias towards learning simpler solutions. This makes them highly prone to learning spurious correlations in the training data, that may not hold at test time. In this work, we provide the first theoretical analysis of the effect of simplicity bias on learning spurious correlations. Notably, we show that examples with spurious features are provably separable based on the model’s output early in training. We further illustrate that if spurious features have a small enough noise-to-signal ratio, the network’s output on majority of examples is almost exclusively determined by the spurious features, leading to poor worst-group test accuracy. Finally, we propose SPARE, which identifies spurious correlations early in training, and utilizes importance sampling to alleviate their effect. Empirically, we demonstrate that SPARE outperforms state-of-the-art methods by up to $21.1$% in worst-group accuracy, while being up to $12\times$ faster. We also show the applicability of SPARE, as a highly effective but lightweight method, to *discover spurious correlations*.
Primary Subject Area: Impact of data bias, variance, and drifts
Paper Type: Extended abstracts: up to 2 pages
Participation Mode: Virtual
Confirmation: I have read and agree with the workshop's policy on behalf of myself and my co-authors.
Submission Number: 75