When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction

Published: 21 Nov 2022, Last Modified: 08 Sept 2024 · TSRML 2022
Keywords: Algorithmic Fairness, Bias, Auditing, Healthcare
Abstract: Machine learning models often use group attributes to assign personalized predictions. In this work, we show that models that use group attributes can assign unnecessarily inaccurate predictions to specific groups -- i.e., that training a model with group attributes can reduce performance for specific groups. We propose formal conditions to ensure the "fair use" of group attributes in prediction models -- i.e., collective preference guarantees that can be checked by training one additional model. We characterize how machine learning models can exhibit fair use violations due to standard practices in specification, training, and deployment. We study the prevalence of fair use violations in clinical prediction models. Our results highlight the inability to resolve fair use violations, underscore the need to measure the gains of personalization for all groups who provide personal data, and illustrate actionable interventions to mitigate harm.
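
The abstract describes a check that can be carried out by training one additional model: fit a personalized model that uses the group attribute and a generic model that omits it, then compare performance within each group. The sketch below illustrates this idea only; it is not the paper's protocol. The dataset file, the `outcome` label, and the `group` column are hypothetical, and logistic regression with AUC is used purely for illustration.

```python
# Minimal sketch (assumptions noted above): compare a personalized model that
# uses a group attribute against a generic model that omits it, per group.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("clinical_data.csv")  # hypothetical dataset with numeric features
y = df["outcome"].values
X_generic = df.drop(columns=["outcome", "group"])                     # without group attribute
X_personal = pd.get_dummies(df.drop(columns=["outcome"]), columns=["group"])  # with group attribute

idx_train, idx_test = train_test_split(np.arange(len(df)), test_size=0.3, random_state=0)

# Train the personalized model and the one additional generic model.
personal = LogisticRegression(max_iter=1000).fit(X_personal.iloc[idx_train], y[idx_train])
generic = LogisticRegression(max_iter=1000).fit(X_generic.iloc[idx_train], y[idx_train])

# Compare group-level test performance: a group that is scored less accurately
# by the personalized model than by the generic model gains nothing from
# providing its group attribute, which is the kind of violation the paper flags.
groups_test = df["group"].iloc[idx_test].values
for g in np.unique(groups_test):
    mask = groups_test == g
    if mask.sum() < 2 or len(np.unique(y[idx_test][mask])) < 2:
        continue  # skip groups too small to score reliably
    auc_p = roc_auc_score(y[idx_test][mask],
                          personal.predict_proba(X_personal.iloc[idx_test][mask])[:, 1])
    auc_g = roc_auc_score(y[idx_test][mask],
                          generic.predict_proba(X_generic.iloc[idx_test][mask])[:, 1])
    verdict = "potential fair use violation" if auc_p < auc_g else "gain from personalization"
    print(f"group={g}: personalized AUC={auc_p:.3f}, generic AUC={auc_g:.3f} -> {verdict}")
```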
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/when-personalization-harms-reconsidering-the/code)
