Demographically Blind Models Can Be Unfair: Fairness through Awareness

ACL ARR 2024 June Submission3065 Authors

15 Jun 2024 (modified: 02 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Debiasing methods for learning systems fall into two distinct philosophies of fairness: removing protected attributes from the model, or explicitly including protected attributes in decision-making. However, the source of the bias we seek to mitigate should dictate the choice of debiasing strategy. We categorize existing debiasing methods into these two fairness families, describe different types of biases, and show in controlled experiments that the choice of debiasing method should depend on the type of bias. Our results yield recommendations for practitioners moving forward.
Paper Type: Short
Research Area: Ethics, Bias, and Fairness
Research Area Keywords: reflections and critiques
Contribution Types: Position papers
Languages Studied: English
Submission Number: 3065