On the Inductive Biases of Demographic Parity-based Fair Learning Algorithms

Published: 26 Apr 2024 · Last Modified: 15 Jul 2024 · UAI 2024 poster · CC BY 4.0
Keywords: Fairness in Machine Learning, Demographic Parity, Inductive Bias
Abstract: Fair supervised learning algorithms that assign labels with little dependence on a sensitive attribute have attracted great attention in the machine learning community. While the demographic parity (DP) notion has been frequently used to measure a model's fairness in training fair classifiers, several studies in the literature suggest that enforcing DP can have unintended effects on the learned classifier. In this work, we analytically study the effect of standard DP-based regularization methods on the conditional distribution of the predicted label given the sensitive attribute. Our analysis shows that a training dataset with a non-uniform (imbalanced) distribution of the sensitive attribute can lead to a classification rule biased toward the sensitive-attribute outcome holding the majority of the training data. To control such inductive biases in DP-based fair learning, we propose a sensitive-attribute-based distributionally robust optimization (SA-DRO) method that improves robustness to the marginal distribution of the sensitive attribute. Finally, we present several numerical results on the application of DP-based learning methods to standard centralized and distributed learning problems. The empirical findings support our theoretical results on the inductive biases in DP-based fair learning algorithms and the debiasing effects of the proposed SA-DRO method. The project code is available at [github.com/lh218/Fairness-IB.git](https://github.com/lh218/Fairness-IB.git).
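Below is a minimal sketch of the two ingredients the abstract describes: a DP regularizer that penalizes the gap between group-conditional mean predictions, and a SA-DRO-style worst-case reweighting of per-group losses over the sensitive-attribute marginal. It assumes a PyTorch setup; the function names (`dp_penalty`, `sadro_loss`), the squared-mean-gap surrogate for DP, and the exponential-tilting heuristic for the DRO inner maximization are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import torch

def dp_penalty(scores, a):
    """Illustrative DP surrogate: squared gap between the mean
    predicted scores of the two sensitive-attribute groups."""
    return (scores[a == 0].mean() - scores[a == 1].mean()) ** 2

def sadro_loss(group_losses, group_counts, rho=0.5):
    """SA-DRO-style objective: reweight per-group losses toward the
    worse-off group via exponential tilting of the empirical
    sensitive-attribute marginal; `rho` acts as an (assumed)
    inverse temperature for the inner maximization."""
    p_hat = group_counts / group_counts.sum()  # empirical marginal P(A)
    w = p_hat * torch.exp(rho * group_losses)  # up-weight high-loss groups
    w = w / w.sum()                            # renormalize to a distribution
    return (w * group_losses).sum()

# Toy usage: an imbalanced binary sensitive attribute (6 vs. 2 samples).
scores = torch.sigmoid(torch.randn(8))
a = torch.tensor([0, 0, 0, 0, 0, 0, 1, 1])
group_losses = torch.tensor([0.30, 0.70])      # placeholder per-group losses
counts = torch.tensor([6.0, 2.0])
total = sadro_loss(group_losses, counts) + 0.1 * dp_penalty(scores, a)
```

The DP weight (0.1 here) and the tilting parameter `rho` are placeholder values; in the paper's setting such knobs trade off fairness, accuracy, and sensitivity to the group imbalance that the analysis identifies.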
List Of Authors: Lei, Haoyu and Gohari, Amin and Farnia, Farzan
Latex Source Code: zip
Signed License Agreement: pdf
Code Url: https://github.com/lh218/Fairness-IB.git
Submission Number: 738