Differentially Private Empirical Risk Minimization under the Fairness Lens

21 May 2021, 20:45 (edited 26 Oct 2021) · NeurIPS 2021 Poster
  • Keywords: Differential Privacy, Empirical Risk Minimization, Fairness
  • TL;DR: This paper sheds light on the causes of the disparate impacts arising in the problem of differentially private empirical risk minimization
  • Abstract: Differential Privacy (DP) is an important privacy-enhancing technology for private machine learning systems. It allows one to measure and bound the risk associated with an individual's participation in a computation. However, it was recently observed that DP learning systems may exacerbate bias and unfairness for different groups of individuals. This paper builds on these important observations and sheds light on the causes of the disparate impacts arising in the problem of differentially private empirical risk minimization. It focuses on the accuracy disparity arising among groups of individuals in two well-studied DP learning methods: output perturbation and differentially private stochastic gradient descent. The paper analyzes which data and model properties are responsible for the disproportionate impacts, explains why these aspects affect different groups disproportionately, and proposes guidelines to mitigate these effects. The proposed approach is evaluated on several datasets and settings.
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: zip
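The abstract mentions output perturbation, one of the two DP learning methods studied. As background, a minimal sketch of this mechanism is shown below: train a model non-privately, then add noise calibrated to the training procedure's sensitivity. The function name, the use of the Gaussian mechanism, and the example sensitivity value are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def output_perturbation(theta, epsilon, delta, sensitivity, rng=None):
    """Release model parameters `theta` with (epsilon, delta)-DP.

    Illustrative sketch: adds Gaussian noise with scale set by the
    standard Gaussian-mechanism calibration,
        sigma = sensitivity * sqrt(2 * ln(1.25 / delta)) / epsilon.
    `sensitivity` is the L2 sensitivity of the (non-private) training
    procedure; for lambda-strongly-convex regularized ERM on n examples
    with bounded gradients, a common bound is 2 / (n * lambda).
    """
    rng = np.random.default_rng(rng)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(0.0, sigma, size=np.shape(theta))
    return np.asarray(theta) + noise

# Hypothetical usage: privatize parameters of an already-trained model.
theta_trained = np.array([0.8, -1.2, 0.3])
theta_private = output_perturbation(
    theta_trained, epsilon=1.0, delta=1e-5, sensitivity=0.05
)
```

Because the same noise scale is applied to all parameters regardless of group, groups whose predictions are more sensitive to parameter perturbations can lose more accuracy, which is one of the disparities the paper investigates.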
11 Replies
