[Re] Fair Selective Classification Via Sufficiency

Anonymous

05 Feb 2022 (modified: 05 May 2023), ML Reproducibility Challenge 2021 Fall Blind Submission
Keywords: selective classification, fairness, machine learning, deep learning, conditional mutual information, pytorch
TL;DR: We attempted to reproduce the improvement in fair selective classification obtained using a novel upper bound on conditional mutual information.
Abstract: Lee et al. (2021) introduced a method for enforcing fairness in selective classification, deriving a novel upper bound on the conditional mutual information from the sufficiency criterion. We attempt to verify their second claim: "[this novel upper bound] can be used as a regularizer to enforce the sufficiency criteria, [and] then show that it works to mitigate the disparities on real-world datasets." To verify the authors' claim, we implemented the model and regularizer described in the original paper. We trained both a baseline and a regularized model on three of the four datasets used by the authors: Adult, CelebA, and CheXpert. We found that we could not reproduce the original paper's results. While the area between the precision curves decreases somewhat for the CelebA and CheXpert datasets, it increases for the Adult experiment. Moreover, our analysis of the margin distributions of the baseline and regularized models does not indicate an increase in overlap between groups. Given these results, using our implementation we cannot confirm the effectiveness of the regularizer in reducing the disparity between groups that was demonstrated by the authors of Fair Selective Classification via Sufficiency.
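The "area between precision curves" mentioned above measures the disparity between two groups' precision-coverage curves. The sketch below is our illustrative assumption of how such a metric can be computed (trapezoidal integration of the absolute precision gap over coverage); the function name and details are ours, not the authors' exact implementation.

```python
import numpy as np

def area_between_curves(coverages, prec_group_a, prec_group_b):
    """Area between two groups' precision-coverage curves.

    Integrates the absolute precision gap over coverage with the
    trapezoidal rule; a smaller area means the selective classifier
    treats the two groups more similarly across coverage levels.
    """
    cov = np.asarray(coverages, dtype=float)
    gap = np.abs(np.asarray(prec_group_a, dtype=float)
                 - np.asarray(prec_group_b, dtype=float))
    # Trapezoidal rule: average gap on each interval times interval width.
    return float(np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(cov)))

# Hypothetical example: a constant 0.1 precision gap over full coverage
# yields an area of 0.1.
cov = np.array([0.0, 0.5, 1.0])
print(area_between_curves(cov, [0.9, 0.9, 0.9], [0.8, 0.8, 0.8]))  # -> 0.1
```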
Paper Url: http://proceedings.mlr.press/v139/lee21b.html
Paper Venue: ICML 2021