Mitigating Simplicity Bias in Deep Learning for Improved OOD Generalization and Robustness

ICML 2023 Workshop SCIS, Submission 38 Authors

Published: 20 Jun 2023, Last Modified: 28 Jul 2023. SCIS 2023 Poster.
Keywords: Simplicity Bias, Spurious Features, OOD Generalization, Subgroup Robustness
TL;DR: We propose a framework to mitigate simplicity bias in neural networks to encourage the use of a diverse set of features, leading to improved subgroup robustness, out-of-distribution generalization and fairness.
Abstract: Neural networks are known to exhibit simplicity bias (SB): they tend to prefer learning 'simple' features over more 'complex' ones, even when the latter may be more informative. SB can lead the model to make biased predictions that exhibit poor out-of-distribution (OOD) generalization and robustness. To address this, we propose a framework that encourages the model to use a more diverse set of features to make predictions. We first train a simple model, and then regularize the final model's conditional mutual information with respect to it. We demonstrate the effectiveness of this framework in various problem settings and real-world applications, showing that it effectively addresses SB and enhances OOD generalization, subgroup robustness, and fairness. We complement these results with theoretical analyses of the effect of the regularization and its OOD generalization properties.
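The abstract's regularizer penalizes the conditional mutual information between the final model and the pre-trained simple model, given the label. As an illustrative sketch only (not the authors' implementation), the snippet below shows a plug-in empirical estimator of I(Ŷ_f; Ŷ_s | Y) for discrete predictions; the function name `empirical_cmi` and the use of hard predictions rather than soft logits are assumptions for this example. Driving this quantity toward zero encourages the final model's predictions to carry no information about the simple model's predictions beyond what the true label already provides.

```python
import numpy as np

def empirical_cmi(yf, ys, y):
    """Plug-in estimate of the conditional mutual information
    I(Yf; Ys | Y) in nats, for discrete label arrays of equal length.

    yf : predictions of the final (regularized) model
    ys : predictions of the pre-trained simple model
    y  : ground-truth labels being conditioned on
    """
    yf, ys, y = map(np.asarray, (yf, ys, y))
    cmi = 0.0
    for cv in np.unique(y):
        mask = y == cv
        p_c = mask.mean()                 # P(Y = cv)
        fc, sc = yf[mask], ys[mask]
        for fv in np.unique(fc):
            for sv in np.unique(sc):
                # joint and marginals within the Y = cv slice
                p_joint = np.mean((fc == fv) & (sc == sv))
                if p_joint == 0.0:
                    continue
                p_f = np.mean(fc == fv)
                p_s = np.mean(sc == sv)
                cmi += p_c * p_joint * np.log(p_joint / (p_f * p_s))
    return cmi

# Final model copies the simple model: CMI is maximal (= H(Yf | Y)).
y  = [0, 0, 0, 0, 1, 1, 1, 1]
yf = [0, 1, 0, 1, 0, 1, 0, 1]
print(empirical_cmi(yf, yf, y))   # -> log(2) ≈ 0.6931

# Final model is conditionally independent of the simple one: CMI is 0.
yf2 = [0, 0, 1, 1, 0, 0, 1, 1]
ys2 = [0, 1, 0, 1, 0, 1, 0, 1]
print(empirical_cmi(yf2, ys2, y)) # -> 0.0
```

In practice a differentiable estimator over soft predictions would be needed to use this as a training penalty; the discrete plug-in version above is only meant to make the quantity being regularized concrete.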
Submission Number: 38