Optimal Regularization for Performative Learning

ICLR 2026 Conference Submission 7451 Authors

16 Sept 2025 (modified: 26 Nov 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Performative Prediction, High-dimensional regression, Ridge regularization
TL;DR: We characterize the impact of regularization on performative learning in high-dimensional regression, showing that performative effects can improve performance when the regularizer is set optimally.
Abstract: In performative learning, the data distribution reacts to the deployed model—for example, because strategic users adapt their features to game it—which creates a more complex dynamic than in classical supervised learning. One should thus not only optimize the model for the current data but also take into account that the model might steer the distribution in a new direction, without knowing the exact nature of the potential shift. We explore how regularization can help cope with performative effects by studying its impact in high-dimensional ridge regression. We show that, while performative effects worsen the test risk in the population setting, they can be beneficial in the over-parameterized regime where the number of features exceeds the number of samples. We further show that the optimal regularization scales with the overall strength of the performative effect, making it possible to set the regularization in anticipation of this effect. We illustrate this finding through empirical evaluations of the optimal regularization parameter on both synthetic and real-world datasets.
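To make the setup concrete, below is a minimal simulation sketch of the kind of experiment the abstract describes: ridge regression in the over-parameterized regime (more features than samples) where the feature distribution shifts in response to a deployed model, and the ridge parameter is swept to locate the empirically optimal regularization. This is not the paper's model: the linear shift x → x + εθ, the noise level, and all parameter values are illustrative assumptions.

```python
# Minimal sketch of performative ridge regression (illustrative, not the paper's model).
# Assumption: features shift linearly along the deployed model's direction,
# x -> x + eps * theta, a common strategic-response model in this literature.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200           # over-parameterized regime: d > n
eps = 0.5                # hypothetical performative strength

# Hypothetical ground-truth model.
theta_star = rng.normal(size=d) / np.sqrt(d)

def ridge_fit(X, y, lam):
    """Closed-form ridge estimator: (X^T X + lam * I)^{-1} X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def performative_risk(theta, eps, n_test=5000):
    """Test risk on the distribution induced by deploying theta."""
    X = rng.normal(size=(n_test, len(theta)))
    X_shift = X + eps * theta                      # performative feature shift
    y = X_shift @ theta_star + rng.normal(scale=0.1, size=n_test)
    return np.mean((X_shift @ theta - y) ** 2)

# Training data drawn from a distribution already shifted by a previously
# deployed model (for simplicity, the shift here is driven by theta_star).
X_train = rng.normal(size=(n, d)) + eps * theta_star
y_train = X_train @ theta_star + rng.normal(scale=0.1, size=n)

# Sweep the ridge parameter and report the empirically optimal one.
lams = np.logspace(-3, 2, 30)
risks = [performative_risk(ridge_fit(X_train, y_train, lam), eps) for lam in lams]
print(f"optimal lambda ~ {lams[int(np.argmin(risks))]:.3g}")
```

Rerunning the sweep for different values of `eps` would illustrate the abstract's claim that the optimal regularization scales with the strength of the performative effect.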
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 7451