Fairness through partial awareness: Evaluation of the addition of demographic information for bias mitigation methods

Published: 28 Jun 2024, Last Modified: 25 Jul 2024
Venue: NextGenAISafety 2024 Poster
License: CC BY 4.0
Keywords: Machine Learning Fairness, Robustness, Data Scarcity, Bias Mitigation, Proxy Fairness
Abstract: Methods that mitigate demographic biases have typically been studied in two settings: either with full access to demographic information during training, or with demographic information omitted entirely for legal or privacy reasons. In practice, however, data are often collected in stages or composed of different sources, so access to demographic annotations can be flexible rather than falling at either extreme. We investigate the fairness impact of disclosing additional demographic information and find that demographic-unaware methods incur a clear cost on certain fairness metrics compared with demographic-aware methods. We then empirically demonstrate the benefits of a partially-demographic-aware setup: collecting only a small number of new samples with demographic annotations (0.1% of the full set) for an over-parameterized model can substantially recover this cost (a 40% gain in worst-group accuracy). Our findings illustrate that simple data collection efforts may effectively close fairness gaps for models trained on data without demographic information.
Submission Number: 107
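For concreteness, below is a minimal sketch (not the authors' code) of the two quantities the abstract references: worst-group accuracy, and subsampling a 0.1% demographically annotated subset. All names (`worst_group_accuracy`, `annotated_idx`, the synthetic data) are hypothetical illustrations under assumed conventions, e.g., that groups are given as integer ids per sample.

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    """Minimum per-group accuracy over demographic groups.
    `groups` assigns each sample a group id (e.g., class x demographic)."""
    accs = [(preds[groups == g] == labels[groups == g]).mean()
            for g in np.unique(groups)]
    return min(accs)

rng = np.random.default_rng(0)
n = 100_000

# Partial awareness: collect demographic annotations for only 0.1% of the
# full set, as in the abstract's setup.
annotated_idx = rng.choice(n, size=int(n * 0.001), replace=False)

# Tiny synthetic check of the metric itself.
groups = rng.integers(0, 4, size=n)
labels = rng.integers(0, 2, size=n)
preds = np.where(rng.random(n) < 0.9, labels, 1 - labels)  # ~90% accurate
print(f"worst-group accuracy: {worst_group_accuracy(preds, labels, groups):.3f}")
```

Worst-group accuracy reports the accuracy of the single worst-off demographic group rather than the average, which is why it is a natural target for the bias-mitigation comparison the abstract describes.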