Non-convex Optimization for Learning a Fair Predictor under Equalized Loss Fairness Constraint

Published: 28 Jan 2022, Last Modified: 13 Feb 2023 · ICLR 2022 Submitted · Readers: Everyone
Keywords: Non-convex Optimization, Fairness, Supervised Learning
Abstract: Supervised learning models are increasingly used in domains such as lending, college admission, natural language processing, and face recognition. These models may inherit pre-existing biases from their training datasets and exhibit discrimination against protected social groups. Various fairness notions have been introduced to address these issues. In general, finding a fair predictor leads to a constrained optimization problem that, depending on the fairness notion, may be non-convex. In this work, we focus on Equalized Loss ($\textsf{EL}$), a fairness notion that requires the prediction error/loss to be equalized across different demographic groups. Imposing this constraint on the learning process leads to a non-convex optimization problem even if the loss function is convex. We introduce algorithms that leverage off-the-shelf convex programming tools and efficiently find the $\textit{global}$ optimum of this non-convex problem. In particular, we first propose the $\mathtt{ELminimizer}$ algorithm, which finds the optimal $\textsf{EL}$-fair predictor by reducing the non-convex problem to a sequence of convex constrained optimizations. We then propose a simpler algorithm that is computationally more efficient than $\mathtt{ELminimizer}$ and finds a sub-optimal $\textsf{EL}$-fair predictor using only $\textit{unconstrained}$ convex programming tools. Experiments on real-world data demonstrate the effectiveness of our algorithms.
One-sentence Summary: This paper solves a non-convex optimization problem to find a fair predictor under the equalized loss fairness constraint.
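The flavor of the second, simpler algorithm described in the abstract (equalizing group losses using only unconstrained convex solves) can be illustrated with a minimal sketch. This is an assumed construction for intuition, not the paper's exact method: it interpolates between the pooled least-squares fit and the disadvantaged group's own fit, then bisects along that segment for a point where the two group losses coincide. All names (`w_all`, `w_dis`, `gap`, etc.) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: two groups with different true coefficients,
# so the pooled least-squares fit incurs unequal losses across groups.
def make_group(n, w_true, rng, noise=0.1):
    X = rng.normal(size=(n, 3))
    y = X @ np.asarray(w_true) + noise * rng.normal(size=n)
    return X, y

X0, y0 = make_group(300, [1.0, -2.0, 0.5], rng)   # majority group
X1, y1 = make_group(100, [-1.0, 1.0, 2.0], rng)   # disadvantaged group

def loss(w, X, y):
    return np.mean((X @ w - y) ** 2)  # per-group mean squared error

# Two *unconstrained* solves: the pooled minimizer and the
# disadvantaged group's own minimizer (closed-form least squares).
w_all = np.linalg.lstsq(np.vstack([X0, X1]),
                        np.concatenate([y0, y1]), rcond=None)[0]
w_dis = np.linalg.lstsq(X1, y1, rcond=None)[0]

# Loss gap along the segment w(b) = (1 - b) * w_all + b * w_dis.
# It is continuous in b and changes sign between b = 0 and b = 1
# for this data, so bisection finds a point of equal group losses.
def gap(b):
    w = (1 - b) * w_all + b * w_dis
    return loss(w, X0, y0) - loss(w, X1, y1)

lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if gap(lo) * gap(mid) <= 0:
        hi = mid
    else:
        lo = mid

b_star = 0.5 * (lo + hi)
w_fair = (1 - b_star) * w_all + b_star * w_dis
print(abs(gap(b_star)))  # loss gap after bisection (near zero)
```

Note that each group's squared loss is convex in $w$, yet the equality constraint $L_0(w) = L_1(w)$ makes the feasible set non-convex; the sketch sidesteps the constrained problem entirely with two unconstrained solves plus a one-dimensional search, mirroring the efficiency claim in the abstract.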
Supplementary Material: zip
