Training individually fair ML models with sensitive subspace robustness

Sep 25, 2019 · Blind Submission
  • Keywords: fairness, adversarial robustness
  • TL;DR: An algorithm for training individually fair classifiers using adversarial robustness
  • Abstract: We consider training machine learning models that are fair in the sense that their performance is invariant under certain sensitive perturbations to the inputs. For example, the performance of a resume screening system should be invariant under changes to the gender and/or ethnicity of the applicant. We formalize this notion of algorithmic fairness as a variant of individual fairness and develop a distributionally robust optimization approach to enforce it during training. We also demonstrate the effectiveness of the approach on two ML tasks that are susceptible to gender and racial biases. (A rough sketch of such a training loop appears after this list.)
  • Code: https://github.com/IBM/sensitive-subspace-robustness
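To make the abstract's idea concrete, here is a minimal sketch of training against worst-case perturbations confined to a sensitive subspace. This is not the authors' implementation (the linked repo is the reference code); the names `sensitive_perturbation`, `train_step`, and the `basis` matrix spanning the sensitive directions are hypothetical, and the simple gradient-ascent inner loop stands in for the paper's distributionally robust inner maximization:

```python
# Hypothetical sketch of sensitive-subspace adversarial training.
# Not the SenSR reference implementation; see the repo linked above.
import torch
import torch.nn.functional as F

def sensitive_perturbation(model, x, y, basis, steps=10, lr=0.1):
    """Gradient ascent on the loss, restricted to the sensitive subspace.

    `basis` is a (d, k) matrix whose columns span the sensitive subspace
    (e.g. directions along which gender/ethnicity vary). Estimating this
    subspace is part of the paper's method and is not shown here.
    """
    # Parameterize the perturbation by its k subspace coordinates, so the
    # adversary can only move inputs along the sensitive directions.
    coords = torch.zeros(x.size(0), basis.size(1),
                         device=x.device, requires_grad=True)
    opt = torch.optim.Adam([coords], lr=lr)
    for _ in range(steps):
        delta = coords @ basis.T          # lift coordinates into input space
        loss = F.cross_entropy(model(x + delta), y)
        opt.zero_grad()
        (-loss).backward()                # ascend: maximize the loss
        opt.step()
    return (coords @ basis.T).detach()

def train_step(model, opt, x, y, basis):
    """One robust step: fit the model on worst-case sensitive perturbations."""
    delta = sensitive_perturbation(model, x, y, basis)
    opt.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    opt.step()
    return loss.item()
```

The point being illustrated: because the adversary's perturbation is parameterized in subspace coordinates, the model is only pushed to be invariant along the sensitive directions, rather than made robust to arbitrary input noise.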