Fairness-Preserving Regularizer: Balancing Core and Spurious Features

ICML 2023 Workshop on Spurious Correlations, Invariance and Stability (SCIS), Submission 95

Published: 20 Jun 2023, Last Modified: 28 Jul 2023. SCIS 2023 Poster.
Keywords: Spurious correlation, empirical risk minimization, OOD generalization
TL;DR: Fairly balance both core and spurious features as a compromise when prior knowledge for identifying them is absent.
Abstract: Real-world visual data contain multiple attributes, e.g., color, shape, foreground, and background. To solve a specific learning task, a machine learning model should rely on a specific set of attributes. In principle, which attributes are core (non-spurious) is determined by the task itself, regardless of how strongly other attributes are (spuriously) correlated with the label. Without prior knowledge identifying the core or spurious attributes, however, we can hardly tell in real-world scenarios whether a learned correlation is spurious. In this work, we study this realistic setting: since no prior knowledge determines which features are core or spurious, we aim to learn a regularized predictor that fairly balances both. To this end, we first formalize fairness of learned features in a linear predictor under the multi-view data distribution assumption (Allen-Zhu & Li, 2023). We prove that achieving this fairness can be bounded by a simple regularization term, and we use this result to design a fairness-preserving regularizer. Experiments on the Waterbirds, CelebA, and Wilds-FMOW datasets validate the effectiveness of our method.
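The abstract does not give the regularizer's closed form, so the following is an illustrative sketch only, not the authors' actual method: one simple way to "fairly balance" two feature groups in a linear predictor is to penalize the gap between the per-group weight norms. The function name `balance_regularizer` and the feature grouping are hypothetical, introduced purely to make the balancing idea concrete.

```python
import numpy as np

def balance_regularizer(w, groups):
    """Hypothetical balance penalty (NOT the paper's exact term):
    squared difference of the per-group weight norms, encouraging the
    predictor to spread its reliance across both feature groups."""
    norms = [np.linalg.norm(w[g]) for g in groups]
    return (norms[0] - norms[1]) ** 2

# Toy example: 4 features, the first two in one view, the last two in another.
groups = [np.array([0, 1]), np.array([2, 3])]
w_imbalanced = np.array([1.0, 1.0, 0.0, 0.0])  # relies on one view only
w_balanced = np.array([1.0, 0.0, 0.0, 1.0])    # equal reliance on both views
print(balance_regularizer(w_imbalanced, groups))  # positive penalty
print(balance_regularizer(w_balanced, groups))    # zero penalty
```

Adding such a term to the empirical risk would push the learned weights toward using both views, which mirrors the abstract's compromise when core and spurious features cannot be distinguished a priori.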
Submission Number: 95