Fairness under Noise Perturbation: from the Perspective of Distribution Shift

10 May 2023 (modified: 12 Dec 2023) · Submitted to NeurIPS 2023
Keywords: noise-tolerant fairness, distribution shift, fair representation learning
TL;DR: A fair representation learning method that addresses both label noise and sensitive attribute noise.
Abstract: Much work on fairness assumes access to clean data during training. In practice, however, due to privacy or legal concerns, the collected data can be inaccurate or intentionally perturbed by agents. Under such scenarios, fairness measures computed on noisy data become biased estimates of the ground-truth discrimination, so a model that appears fair during training can be unfair at deployment. Existing work on noise-tolerant fairness assumes a group-wise universal flip, which can become trivial during training, and requires extra tools for noise rate estimation. In light of these limitations, we approach the problem from the novel perspective of distribution shift and propose a normalizing flow framework for noise-tolerant fairness that does not require noise rate estimation and is applicable to both \emph{sensitive attribute noise} and \emph{label noise}. We formulate the noise perturbation as both group- and label-dependent, analyze theoretically the connections between fairness measures under noisy and clean data, and prove the transferability of fairness from noisy to clean data under both types of noise. Experimental results on three datasets show that our method outperforms state-of-the-art alternatives, achieving better or comparable improvements in group fairness with a relatively small decrease in accuracy, both when a single noise type is present and when both types of noise occur simultaneously.
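
The following is a minimal, self-contained sketch (not the paper's method) of the two ideas the abstract relies on: a group- and label-dependent flip of the sensitive attribute, and the resulting bias in a fairness measure computed on noisy rather than clean data. All flip rates, the synthetic data, and the fixed classifier are hypothetical choices for illustration.

```python
# Illustrative sketch only: simulate group- and label-dependent attribute noise
# and compare a fairness measure on clean vs. noisy sensitive attributes.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Synthetic "clean" data: sensitive attribute a, label y, and a score correlated with both.
a = rng.binomial(1, 0.4, size=n)
y = rng.binomial(1, 0.35 + 0.25 * a, size=n)
score = 0.6 * y + 0.2 * a + rng.normal(0, 0.3, size=n)
y_hat = (score > 0.5).astype(int)  # predictions of some fixed classifier

# Group- and label-dependent flip of the sensitive attribute:
# P(a_noisy != a | a, y) depends on both a and y (hypothetical rates).
rho = {(0, 0): 0.10, (0, 1): 0.25, (1, 0): 0.20, (1, 1): 0.05}
flip_prob = np.array([rho[(ai, yi)] for ai, yi in zip(a, y)])
a_noisy = np.where(rng.random(n) < flip_prob, 1 - a, a)

def dp_gap(pred, group):
    """Demographic parity gap: |P(y_hat=1 | group=1) - P(y_hat=1 | group=0)|."""
    return abs(pred[group == 1].mean() - pred[group == 0].mean())

print(f"DP gap with clean attribute : {dp_gap(y_hat, a):.3f}")
print(f"DP gap with noisy attribute : {dp_gap(y_hat, a_noisy):.3f}")  # biased estimate
```

Because the flip rates differ across (group, label) cells, the gap measured on the noisy attribute systematically under- or over-states the clean-data gap, which is the biased-estimation issue the abstract describes.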
Supplementary Material: zip
Submission Number: 5733