On the Impact of Performative Risk Minimization for Binary Random Variables

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We provide a binary model for studying the impact of performative risk minimization under linear shifts on the data distribution and model predictions
Abstract: Performativity, the phenomenon where outcomes are influenced by predictions, is particularly prevalent in social contexts where individuals strategically respond to a deployed model. To preserve the high accuracy of machine learning models under distribution shifts caused by performativity, Perdomo et al. (2020) introduced the concept of performative risk minimization (PRM). While this framework ensures model accuracy, it overlooks the impact of PRM on the underlying distributions and on the predictions of the model. In this paper, we initiate the analysis of the impact of PRM by studying a sequential performative risk minimization problem with binary random variables and linear performative shifts. We formulate two natural measures of impact. In the case of full information, where the distribution dynamics are known, we derive explicit formulas for the PRM solution and our impact measures. In the case of partial information, we provide performativity-aware statistical estimators, as well as simulations. Our analysis contrasts PRM with alternatives that do not model data shift and indicates that PRM can have amplified side effects compared to such methods.
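The setting described in the abstract (binary outcomes, linear performative shift) can be illustrated with a small sketch. Here we assume a hypothetical dynamic in which the Bernoulli mean shifts linearly with the deployed prediction, m(θ) = (1 − ε)·p + ε·θ, and the model is evaluated with squared loss; the paper's exact dynamics, loss, and impact measures may differ.

```python
def performative_risk(theta, p, eps):
    """Performative risk E_{Y ~ D(theta)}[(theta - Y)^2] for a binary Y,
    under a hypothetical linear shift of the Bernoulli mean:
        m(theta) = (1 - eps) * p + eps * theta.
    Since Y^2 = Y for a Bernoulli variable, the risk expands to
        theta^2 - 2 * theta * m + m.
    """
    m = (1 - eps) * p + eps * theta
    return theta**2 - 2 * theta * m + m

def prm_solution(p, eps):
    """Closed-form PRM minimizer of the quadratic above.
    PR(theta) = (1 - 2*eps) * theta^2 - (2*(1-eps)*p - eps) * theta + (1-eps)*p,
    so for eps < 1/2 the argmin is:
    """
    return (2 * (1 - eps) * p - eps) / (2 * (1 - 2 * eps))

def rrm_fixed_point(p, eps):
    """Repeated (performativity-unaware) risk minimization converges to a
    point where theta equals the mean of the distribution it induces:
        theta = (1 - eps) * p + eps * theta  =>  theta = p,
    i.e. the base mean is left unshifted under this linear dynamic.
    """
    return p

if __name__ == "__main__":
    p, eps = 0.7, 0.2
    theta_prm = prm_solution(p, eps)
    shifted_mean = (1 - eps) * p + eps * theta_prm
    # One natural impact measure: how far PRM drags the mean from p,
    # versus zero shift at the RRM fixed point.
    print(theta_prm, abs(shifted_mean - p), rrm_fixed_point(p, eps))
```

Under these illustrative dynamics the RRM fixed point leaves the distribution at its base mean p, while the PRM solution deliberately exploits the shift, moving the induced mean away from p, which mirrors the abstract's claim that PRM can have amplified side effects relative to performativity-unaware training.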
Lay Summary: Predictions made by machine learning models often impact their environment (a phenomenon known as performativity); for example, drug efficacy estimates influence the drug's effectiveness due to the placebo effect. Our work theoretically studies how different approaches to model training impact the surrounding environment in the presence of performativity. In particular, we find that model training methods that explicitly account for performativity often lead to a larger shift in the distribution and bias of the decisions, compared to standard training alternatives. We hope that our work will help practitioners to better understand and control potential negative side effects of performative ML training.
Link To Code: https://github.com/insait-institute/performative-prediction-impact-replication
Primary Area: Social Aspects
Keywords: performative prediction, impact, sequential decision making
Submission Number: 6525