When Fair Ranking Meets Uncertain Inference

16 May 2021 (modified: 05 May 2023) · ACM SIGIR Badging Submission
Abstract: Existing fair ranking systems, especially those designed to be demographically fair, assume that accurate demographic information about individuals is available to the ranking algorithm. In practice, however, this assumption may not hold -- in real-world contexts like ranking job applicants or credit seekers, social and legal barriers may prevent algorithm operators from collecting people's demographic information. In these cases, algorithm operators may attempt to infer people's demographics and then supply these inferences as inputs to the ranking algorithm. In this study, we investigate how uncertainty and errors in demographic inference impact the fairness offered by fair ranking algorithms. Using simulations and three case studies with real datasets, we show how demographic inferences drawn from real systems can lead to unfair rankings. Our results suggest that developers should not use inferred demographic data as input to fair ranking algorithms, unless the inferences are extremely accurate.
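The mechanism the abstract describes can be illustrated with a small synthetic simulation. The sketch below is not the authors' code and does not implement any specific fair ranking algorithm from the paper; it is a hypothetical Python toy (the names `fair_rerank`, `simulate`, and `inference_accuracy` are invented here for illustration) in which a re-ranker enforces proportional representation using noisy inferred group labels, and the representation actually delivered to the true groups degrades as inference accuracy drops.

```python
import random


def fair_rerank(candidates, labels, target_share, k):
    """Greedily build a top-k list whose share of group-1 candidates
    (according to `labels`) tracks `target_share`. `candidates` are
    assumed to be pre-sorted by relevance, best first."""
    group1 = [c for c in candidates if labels[c] == 1]
    group0 = [c for c in candidates if labels[c] == 0]
    ranked = []
    while len(ranked) < k and (group1 or group0):
        # Take from group 1 when its (believed) share is below target,
        # falling back to whichever pool still has candidates.
        need_g1 = sum(labels[c] for c in ranked) < target_share * (len(ranked) + 1)
        pool = group1 if (need_g1 and group1) or not group0 else group0
        ranked.append(pool.pop(0))
    return ranked


def simulate(n=1000, k=20, true_share=0.5, inference_accuracy=0.7, seed=0):
    rng = random.Random(seed)
    true_labels = {c: int(rng.random() < true_share) for c in range(n)}
    # Biased relevance: group-1 candidates score lower on average, so an
    # unconstrained ranking under-represents them at the top.
    scores = {c: rng.random() - 0.3 * true_labels[c] for c in range(n)}
    candidates = sorted(range(n), key=lambda c: -scores[c])
    # Noisy demographic inference: each label flips with prob. 1 - accuracy.
    inferred = {c: lab if rng.random() < inference_accuracy else 1 - lab
                for c, lab in true_labels.items()}
    top_k = fair_rerank(candidates, inferred, target_share=true_share, k=k)
    return sum(true_labels[c] for c in top_k) / k  # true group-1 share attained


if __name__ == "__main__":
    for acc in (1.0, 0.9, 0.7, 0.5):
        print(f"inference accuracy {acc:.1f} -> "
              f"true group-1 share in top-20: {simulate(inference_accuracy=acc):.2f}")
```

In this toy, perfect inference lets the re-ranker hit the target share exactly; as accuracy falls, misclassified majority-group candidates absorb the slots intended for the protected group, so the ranking looks fair on inferred labels while missing its target on the true ones.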
Artifact Type Made Available By Authors: Code, Dataset
Requested Badges: Artifacts Evaluated – Functional, Artifacts Evaluated – Reusable and Available, Results Reproduced
Venue Accepted: ACM SIGIR