Fair Wrapping for Black-box Predictions

Published: 31 Oct 2022, 18:00, Last Modified: 12 Oct 2022, 02:33 — NeurIPS 2022 Accept
Keywords: Fairness, post-processing, loss functions, boosting
TL;DR: We present a framework for reducing the bias of black-box classifiers by interpreting unfairness as a twist to be corrected through the improper alpha-loss.
Abstract: We introduce a new family of techniques to post-process (``wrap'') a black-box classifier in order to reduce its bias. Our technique builds on the recent analysis of improper loss functions, whose optimization can correct any twist in prediction, unfairness being treated as a twist. In post-processing, we learn a wrapper function, which we define as an $\alpha$-tree, that modifies the prediction. We provide two generic boosting algorithms to learn $\alpha$-trees. We show that our modification has appealing properties in terms of composition of $\alpha$-trees, generalization, interpretability, and KL divergence between modified and original predictions. We exemplify the use of our technique in three fairness notions: conditional value-at-risk, equality of opportunity, and statistical parity; and provide experiments on several readily available datasets.
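To make the wrapping idea concrete, here is a minimal, purely illustrative sketch of post-processing a black-box probability with a tiny tree of per-leaf correction exponents. The functions `black_box`, `alpha_tree`, and the specific tilt `alpha_correct` are hypothetical stand-ins, not the paper's exact $\alpha$-tree update or boosting procedure; they only show the shape of the approach (black box untouched, a learned wrapper twisting its output).

```python
import math

def alpha_correct(p, alpha):
    """Tilt a probability p by exponent alpha.

    alpha > 1 sharpens the prediction toward 0 or 1;
    alpha < 1 shrinks it toward 1/2 (an illustrative correction,
    not the paper's exact update rule).
    """
    num = p ** alpha
    return num / (num + (1.0 - p) ** alpha)

def black_box(x):
    """Hypothetical frozen black-box classifier: P(y=1 | x) for scalar x."""
    return 1.0 / (1.0 + math.exp(-x))

def alpha_tree(x):
    """Toy 'alpha-tree': one split, one alpha per leaf.

    In the paper this tree would be learned by boosting; here the
    split point and leaf values are made up for illustration.
    """
    return 0.5 if x < 0.0 else 2.0

def wrapped(x):
    """Post-processed prediction: black box composed with the wrapper."""
    return alpha_correct(black_box(x), alpha_tree(x))
```

Note that the wrapper never needs the black box's internals, only its output probability, which is what makes this a post-processing (rather than retraining) method.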
Supplementary Material: pdf