SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness

28 Sept 2020, 15:50 (modified: 15 Mar 2021, 23:30) · ICLR 2021 Oral
Keywords: Algorithmic fairness, invariance
Abstract: In this paper, we cast fair machine learning as invariant machine learning. We first formulate a version of individual fairness that enforces invariance on certain sensitive sets. We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently. Our theoretical results guarantee that the proposed approach trains certifiably fair ML models. Finally, in experimental studies, we demonstrate improved fairness metrics in comparison to several recent fair training procedures on three ML tasks that are susceptible to algorithmic bias.
One-sentence Summary: We propose a new invariance-enforcing regularizer for training individually fair ML systems.
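To make the "invariance on sensitive sets" idea concrete, here is a toy sketch. It is not SenSeI's actual method: the paper's regularizer is transport-based and minimized with a dedicated algorithm, whereas this sketch only illustrates the general notion of penalizing output differences over a sensitive set. The sensitive set here is assumed, for illustration, to be each input together with its reflection across a hypothetical "sensitive direction" `s`, and the model is assumed linear.

```python
import numpy as np

def sensitive_reflection(X, s):
    """Map each row of X to its counterpart in the (assumed) sensitive set
    by flipping its component along the unit sensitive direction s."""
    s = s / np.linalg.norm(s)
    proj = X @ s                       # component of each input along s
    return X - 2.0 * np.outer(proj, s)

def invariance_penalty(w, X, s):
    """Mean squared difference between the outputs of a linear model
    f(x) = w . x on inputs and on their sensitive counterparts.
    A model that ignores the sensitive direction incurs zero penalty."""
    Xp = sensitive_reflection(X, s)
    return np.mean((X @ w - Xp @ w) ** 2)

# A weight vector aligned with the sensitive direction is penalized;
# one orthogonal to it is invariant on the sensitive set.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
s = np.array([1.0, 0.0])               # hypothetical sensitive direction
w_biased = np.array([1.0, 0.0])
w_fair = np.array([0.0, 1.0])
```

In training, a penalty of this kind would be added to the task loss; SenSeI instead uses a transport-based formulation that searches over comparable inputs rather than fixing a single reflection.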
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics