Balancing Fairness and Accuracy in Data-Restricted Binary Classification

Submitted to ICLR 2024 · 21 Sept 2023 (modified: 11 Feb 2024)
Supplementary Material: pdf
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Fairness, Fairness-Accuracy Tradeoff, Linear Programming, Method of Multipliers, Convex Optimization, Vector Quantization
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We propose a framework for analyzing the tradeoff between accuracy and fairness in different situations that limit the data available to a model.
Abstract: Applications that deal with sensitive information may have restrictions placed on the data available to a machine learning (ML) model. For example, in some applications a model may not have direct access to sensitive attributes. This can affect the ability of an ML model to produce accurate and fair decisions. This paper proposes a framework that models the tradeoff between accuracy and fairness under four practical scenarios that dictate the type of data available for analysis. In contrast to prior work that examines the outputs of a scoring function, our framework directly analyzes the joint distribution of the feature vector, class label, and sensitive attribute by constructing a discrete approximation from a dataset. By formulating multiple convex optimization problems, we answer the question: How is the accuracy of a Bayesian oracle affected in each situation when constrained to be fair? Analysis is performed on a suite of fairness definitions that includes both group and individual fairness. Experiments on three datasets demonstrate the utility of the proposed framework as a tool for quantifying the tradeoffs among different fairness notions and their distributional dependencies.
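The core idea described in the abstract — discretize the joint distribution of features, label, and sensitive attribute, then solve a linear program for the most accurate decision rule subject to a fairness constraint — can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic data, the choice of demographic parity as the fairness notion, and the tolerance `eps` are all illustrative assumptions; the paper's framework covers several fairness definitions and data-availability scenarios.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Synthetic data: discrete feature cell x in {0..K-1} (in practice produced by
# vector quantization), binary label y, binary sensitive attribute a.
K, n = 8, 5000
a = rng.integers(0, 2, n)
x = rng.integers(0, K, n)
y = (rng.random(n) < (0.2 + 0.6 * (x >= K // 2))).astype(int)

# Discrete approximation of the joint distribution p(x, y, a).
p = np.zeros((K, 2, 2))
for xi, yi, ai in zip(x, y, a):
    p[xi, yi, ai] += 1
p /= n

# Randomized decision rule t[x] = P(yhat = 1 | x). Accuracy is linear in t:
#   acc(t) = sum_x [ p(x, y=1) * t_x + p(x, y=0) * (1 - t_x) ].
p_x_y1 = p[:, 1, :].sum(axis=1)   # p(x, y=1)
p_x_y0 = p[:, 0, :].sum(axis=1)   # p(x, y=0)
c = -(p_x_y1 - p_x_y0)            # minimize -accuracy (constant term dropped)

# Demographic parity constraint: |P(yhat=1 | a=0) - P(yhat=1 | a=1)| <= eps.
p_a = p.sum(axis=(0, 1))          # marginal of the sensitive attribute
d = p[:, :, 0].sum(axis=1) / p_a[0] - p[:, :, 1].sum(axis=1) / p_a[1]
eps = 0.01
A_ub = np.vstack([d, -d])
b_ub = np.array([eps, eps])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * K)
t = res.x
acc = (p_x_y1 * t + p_x_y0 * (1 - t)).sum()
gap = abs(d @ t)
print(f"fairness-constrained accuracy: {acc:.3f}, parity gap: {gap:.4f}")
```

Comparing the optimum of this LP with the unconstrained Bayes-optimal rule (threshold `t_x = 1` whenever `p(x, y=1) > p(x, y=0)`) quantifies the accuracy cost of fairness on the discretized distribution, which is the kind of tradeoff the paper's framework evaluates across scenarios.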
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4238