Adjusting Machine Learning Decisions for Equal Opportunity and Counterfactual Fairness

Published: 07 Jul 2023, Last Modified: 10 Jul 2023, Accepted by TMLR
Abstract: Machine learning (ML) methods have the potential to automate high-stakes decisions, such as bail admissions or credit lending, by analyzing and learning from historical data. But these algorithmic decisions may be unfair: in learning from historical data, they may replicate the discriminatory practices of the past. In this paper, we propose two algorithms that adjust fitted ML predictors to produce decisions that are fair. Our methods provide post-hoc adjustments to the predictors, without requiring that they be retrained. We consider a causal model of the ML decisions, define fairness through counterfactual decisions within the model, and then form algorithmic decisions that fit the historical data as well as possible while being provably fair. In particular, we consider two definitions of fairness. The first is "equal counterfactual opportunity," where the counterfactual distribution of the decision is the same regardless of the protected attribute; the second is counterfactual fairness. We evaluate the algorithms, and the trade-off between accuracy and fairness, on datasets about admissions, income, credit, and recidivism.
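For reference, the two criteria can be stated informally as follows; this is a sketch with assumed notation, not necessarily the paper's: $A$ is the protected attribute, $\hat{Y}$ the decision, $\hat{Y}_{A \leftarrow a}$ its counterfactual under the intervention $A \leftarrow a$, and $X = x$ the observed context. One way to write equal counterfactual opportunity is that the counterfactual decision distribution is invariant to the value the protected attribute is set to,
$$P\big(\hat{Y}_{A \leftarrow a} = y\big) = P\big(\hat{Y}_{A \leftarrow a'} = y\big) \quad \text{for all } y, a, a',$$
while counterfactual fairness (in the standard sense of Kusner et al.) requires the same invariance conditional on each individual's observed context,
$$P\big(\hat{Y}_{A \leftarrow a} = y \mid X = x, A = a\big) = P\big(\hat{Y}_{A \leftarrow a'} = y \mid X = x, A = a\big) \quad \text{for all } y, a'.$$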
Submission Length: Regular submission (no more than 12 pages of main content)
Supplementary Material: pdf
Changes Since Last Submission: Addressed all meta-reviewer comments: clarified the optimality of FTU in the discussion; clarified the difference between FTU and ECO early in the discussion; clarified the choice of the population considered in the expectation (intended as a constraint); changed notation per the meta reviewer's suggestions; addressed all other comments.
Assigned Action Editor: ~Alexandra_Chouldechova1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 445