Keywords: healthcare, algorithmic fairness, risk assessment, clinical decision support, classification, interpretability
TL;DR: We introduce a model-agnostic approach for facilitating informed consent in classification, allowing users to opt in or out of providing costly personal information based on the expected gains in performance.
Abstract: Machine learning models are often personalized based on information that is protected, sensitive, self-reported, or costly to acquire. These models use information about people but neither facilitate nor inform their \emph{consent}: individuals cannot opt out of reporting information that a model needs to personalize their predictions, nor can they tell whether they would benefit from personalization in the first place. We introduce a new family of prediction models, called participatory systems, that let individuals opt into personalization at prediction time. We present a model-agnostic algorithm for learning participatory systems in supervised learning tasks where models are personalized with categorical group attributes. We conduct a comprehensive empirical study of participatory systems on clinical prediction tasks, comparing them to common approaches for personalization and imputation. Experimental results demonstrate that participatory systems can facilitate and inform consent while improving performance and privacy across all groups who report personal data.
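The prediction-time opt-in mechanism described in the abstract can be sketched roughly as follows. This is a minimal illustration under our own assumptions, not the paper's algorithm: `generic_model`, `personalized_model`, and the per-group `gain_by_group` estimates are hypothetical inputs, and the routing rule simply falls back to the generic model whenever group attributes are withheld or personalization offers no estimated benefit.

```python
# A minimal, hypothetical sketch of a participatory system at prediction
# time (not the paper's actual algorithm). A generic model serves
# everyone; a personalized model is used only when the individual opts
# in to reporting group attributes and reporting is estimated to help.

from dataclasses import dataclass
from typing import Callable, Mapping, Optional, Sequence


@dataclass
class ParticipatorySystem:
    # Model trained without group attributes (assumed given).
    generic_model: Callable[[Sequence[float]], int]
    # Model trained with group attributes (assumed given).
    personalized_model: Callable[[Sequence[float], Mapping[str, str]], int]
    # Estimated per-group performance gain from reporting (assumed given),
    # keyed by a canonical encoding of the group attributes.
    gain_by_group: Mapping[tuple, float]

    def expected_gain(self, group: Mapping[str, str]) -> float:
        """Show the individual how much they stand to gain by reporting,
        so the opt-in decision is informed (illustrative estimate)."""
        return self.gain_by_group.get(tuple(sorted(group.items())), 0.0)

    def predict(self, x: Sequence[float], group: Optional[Mapping[str, str]] = None) -> int:
        """Route to the personalized model only when attributes are
        reported and personalization helps this group."""
        if group is not None and self.expected_gain(group) > 0.0:
            return self.personalized_model(x, group)
        return self.generic_model(x)
```

On this reading, exposing `expected_gain` before prediction is what makes the consent informed: the individual sees the estimated benefit of reporting before deciding whether to report.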
Submission Number: 24