SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness

Sep 28, 2020 (edited Mar 15, 2021) · ICLR 2021 Oral
  • Keywords: Algorithmic fairness, invariance
  • Abstract: In this paper, we cast fair machine learning as invariant machine learning. We first formulate a version of individual fairness that enforces invariance on certain sensitive sets. We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently. Our theoretical results guarantee that the proposed approach trains certifiably fair ML models. Finally, our experiments demonstrate improved fairness metrics, compared to several recent fair training procedures, on three ML tasks that are susceptible to algorithmic bias.
  • One-sentence Summary: We propose a new invariance-enforcing regularizer for training individually fair ML systems.
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • Supplementary Material: zip
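To give a concrete feel for the idea in the abstract, here is a minimal toy sketch (not the authors' actual SenSeI implementation) of an invariance-enforcing penalty. It assumes a linear model and a hypothetical sensitive set obtained by flipping one designated coordinate of each input; the penalty measures how much the model's output changes across that set, which is zero exactly when the model is invariant on it.

```python
import numpy as np

def model(w, X):
    """Toy linear scores for a batch of inputs (illustration only)."""
    return X @ w

def invariance_penalty(w, X, sensitive_index=-1):
    """Mean squared output change when the sensitive coordinate is flipped.

    A toy stand-in for a transport-based fairness regularizer: here the
    'transport' simply maps each point to its counterpart with the
    sensitive coordinate negated, so the penalty vanishes iff the model
    is invariant on this sensitive set.
    """
    X_flipped = X.copy()
    X_flipped[:, sensitive_index] = -X_flipped[:, sensitive_index]
    return float(np.mean((model(w, X) - model(w, X_flipped)) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
w_fair = np.array([1.0, -2.0, 0.0])    # ignores the sensitive coordinate
w_unfair = np.array([1.0, -2.0, 3.0])  # relies on the sensitive coordinate

print(invariance_penalty(w_fair, X))    # 0.0: invariant model, no penalty
print(invariance_penalty(w_unfair, X))  # positive: invariance is violated
```

In training, such a penalty would be added to the task loss with a weight balancing accuracy against fairness; SenSeI's actual regularizer is transport-based and more general than this single-coordinate flip.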