Improving Subgroup Robustness via Data Selection

Published: 25 Sept 2024, Last Modified: 06 Nov 2024
Venue: NeurIPS 2024 poster
License: CC BY 4.0
Keywords: group robustness, fairness, data attribution, machine learning
TL;DR: Improving model performance on underrepresented subpopulations by removing harmful training data.
Abstract: Machine learning models can often fail on subgroups that are underrepresented during training. While dataset balancing can improve performance on underperforming groups, it requires access to training group annotations and can end up removing large portions of the dataset. In this paper, we introduce Data Debiasing with Datamodels (D3M), a debiasing approach that isolates and removes the specific training examples that drive the model's failures on minority groups. Our approach enables us to efficiently train debiased classifiers while removing only a small number of examples, and it requires neither training group annotations nor additional hyperparameter tuning.
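The abstract describes the approach only at a high level. The sketch below illustrates the general attribution-then-remove recipe it alludes to; it is not the authors' D3M implementation. It uses a simple subset-resampling (datamodel-style) estimate of each training example's effect on a held-out loss, whereas D3M would target failures on minority groups specifically. All function names, parameters, and the binary logistic-regression setup are hypothetical choices for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def harmfulness_scores(X_train, y_train, X_val, y_val,
                       n_models=100, frac=0.5, seed=0):
    """Datamodel-style attribution sketch (hypothetical, not the D3M code).

    Trains many models on random subsets of the training data and scores
    each example by the difference in mean validation loss between runs
    that included it and runs that excluded it. A positive score means
    including the example tends to raise the held-out loss. Assumes
    binary labels in {0, 1}.
    """
    rng = np.random.default_rng(seed)
    n = len(X_train)
    masks = np.zeros((n_models, n), dtype=bool)
    losses = np.empty(n_models)
    for i in range(n_models):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        masks[i, idx] = True
        clf = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
        p = clf.predict_proba(X_val)[:, 1]
        # Cross-entropy loss on the held-out set; D3M would instead
        # target the loss on under-performing groups.
        losses[i] = -np.mean(y_val * np.log(p + 1e-12)
                             + (1 - y_val) * np.log(1 - p + 1e-12))
    # Difference-in-means estimate per training example.
    incl_count = np.maximum(masks.sum(axis=0), 1)
    excl_count = np.maximum((~masks).sum(axis=0), 1)
    incl_loss = masks.T.astype(float) @ losses / incl_count
    excl_loss = (~masks).T.astype(float) @ losses / excl_count
    return incl_loss - excl_loss


# Debiasing step: drop the k highest-scoring (most harmful) examples
# and retrain on the remainder.
# scores = harmfulness_scores(X_train, y_train, X_val, y_val)
# keep = np.argsort(scores)[:len(scores) - k]
# debiased_clf = LogisticRegression(max_iter=1000).fit(X_train[keep], y_train[keep])
```

The key design point this mirrors is that removal is targeted: instead of rebalancing whole groups, only the small set of examples estimated to hurt held-out performance is discarded.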
Supplementary Material: zip
Primary Area: Safety in machine learning
Submission Number: 11410