Certifying Some Distributional Fairness with Subpopulation Decomposition

Published: 31 Oct 2022, 18:00; Last Modified: 13 Oct 2022, 01:38. NeurIPS 2022 Accept.
Keywords: Certifying Fairness, fairness constrained distribution, distribution shifts
TL;DR: We propose a general framework for certifying the distributional fairness of a trained model based on a fairness-constrained distribution.
Abstract: Extensive efforts have been made to understand and improve the fairness of machine learning models based on observational metrics, especially in high-stakes domains such as medical insurance, education, and hiring decisions. However, there is a lack of certified fairness that considers the end-to-end performance of an ML model. In this paper, we first formulate the certified fairness of an ML model trained on a given data distribution as an optimization problem, based on the model's performance loss bound on a fairness-constrained distribution that lies within bounded distributional distance of the training distribution. We then propose a general fairness certification framework and instantiate it for both sensitive shifting and general shifting scenarios. In particular, we propose to solve the optimization problem by decomposing the original data distribution into analytical subpopulations and proving the convexity of the subproblems in order to solve them. We evaluate our certified fairness on six real-world datasets and show that our certification is tight in the sensitive shifting scenario and provides non-trivial certification under general shifting. Our framework is flexible enough to integrate additional non-skewness constraints, and we show that it provides even tighter certification under different real-world scenarios. We also compare our certified fairness bound with adapted existing distributional robustness bounds on Gaussian data and demonstrate that our method is significantly tighter.
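The optimization problem described in the abstract can be sketched schematically as below; the notation ($P$, $Q$, $\rho$, $D$, $\ell$, $f$, $\mathcal{F}$) is assumed for illustration and may differ from the paper's own symbols:

```latex
% Schematic certified-fairness problem (assumed notation):
%   P   = training distribution,  Q = shifted test distribution,
%   D   = distributional distance, rho = shift budget,
%   F   = set of fairness-constrained distributions,
%   ell = loss of the trained model f.
\begin{equation*}
  \max_{Q \in \mathcal{F}} \;
  \mathbb{E}_{(x,y)\sim Q}\big[\ell(f(x), y)\big]
  \quad \text{s.t.} \quad D(Q, P) \le \rho .
\end{equation*}
```

Per the abstract, the framework then decomposes $Q$ into analytical subpopulations and proves the convexity of the resulting subproblems, so that the worst-case loss over the fairness-constrained, distance-bounded set can be certified.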