On the Epistemic Limits of Personalized Prediction

Published: 31 Oct 2022, 18:00, Last Modified: 15 Oct 2022, 19:21
NeurIPS 2022 Accept
Keywords: Fairness, Accountability, and Transparency, Information Theory, Predictive Models
TL;DR: This paper aims to understand the conditions under which one can detect fair use violations in predictive models and, more interestingly, the conditions under which estimating fair use is impossible.
Abstract: Machine learning models are often personalized by using group attributes that encode personal characteristics (e.g., sex, age group, HIV status). In such settings, individuals expect to receive more accurate predictions in return for disclosing group attributes to the personalized model. We study when we can verify that a personalized model upholds this principle for every group that provides personal data. We introduce a metric called the benefit of personalization (BoP) to measure the smallest gain in accuracy that any group can expect to receive from a personalized model. We describe how the BoP can be used to carry out basic routines to audit a personalized model, including: (i) hypothesis tests to check that a personalized model improves performance for every group; (ii) estimation procedures to bound the minimum gain from personalization. We characterize the reliability of these routines in a finite-sample regime and present minimax bounds on both the probability of error for BoP hypothesis tests and the mean-squared error of BoP estimates. Our results show that we can only claim that personalization improves performance for each group that provides data when we explicitly limit the number of group attributes used by a personalized model. In particular, we show that it is impossible to reliably verify that a personalized classifier with $k \geq 19$ binary group attributes will benefit every group that provides personal data, even with a dataset of $n = 8\times10^9$ samples -- one for each person in the world.
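
To make the metric concrete: the BoP is the minimum, over groups, of the accuracy gain of the personalized model over a generic (non-personalized) one. A natural plug-in estimate computes this per-group gain on held-out data and takes the minimum. The sketch below illustrates that idea only; the function name `empirical_bop`, the argument layout, and the use of 0-1 accuracy are illustrative assumptions, not the paper's exact estimator or test.

```python
import numpy as np

def empirical_bop(y_true, pred_personalized, pred_generic, group_ids):
    """Plug-in estimate of the benefit of personalization (BoP):
    the minimum, over groups, of the gain in 0-1 accuracy that the
    personalized model achieves over the generic model.
    (Hypothetical sketch; not the paper's estimator.)
    """
    y_true = np.asarray(y_true)
    pred_personalized = np.asarray(pred_personalized)
    pred_generic = np.asarray(pred_generic)
    group_ids = np.asarray(group_ids)

    gains = []
    for g in np.unique(group_ids):
        mask = group_ids == g
        acc_personalized = np.mean(pred_personalized[mask] == y_true[mask])
        acc_generic = np.mean(pred_generic[mask] == y_true[mask])
        gains.append(acc_personalized - acc_generic)

    # The personalized model benefits every group only if the
    # minimum per-group gain (the BoP) is non-negative.
    return min(gains)
```

Note the source of the impossibility result in this framing: with $k$ binary group attributes there are $2^k$ groups, so per-group sample sizes shrink exponentially in $k$, and the minimum over groups is driven by the groups whose gains are hardest to estimate.
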
Supplementary Material: pdf