Optimizing the Performative Risk under Weak Convexity Assumptions

02 Oct 2022, 17:24 (modified: 23 Nov 2022, 20:17) · OPT 2022 Poster
Keywords: performative prediction, weak convexity
TL;DR: We relax previous convex assumptions and obtain weaker notions of convexity for performative risk.
Abstract: In performative prediction, a predictive model impacts the distribution that generates future data, a phenomenon that is ignored in classical supervised learning. In this closed-loop setting, the natural measure of performance, called the performative risk ($\mathrm{PR}$), captures the expected loss incurred by a predictive model \emph{after} deployment. The core difficulty of using the performative risk as an optimization objective is that the data distribution itself depends on the model parameters. This dependence is governed by the environment and is not under the control of the learner. As a consequence, even the choice of a convex loss function can result in a highly non-convex $\mathrm{PR}$ minimization problem. Prior work has identified a pair of general conditions on the loss and on the mapping from model parameters to distributions that together imply the convexity of the performative risk. In this paper, we relax these assumptions and focus on obtaining weaker notions of convexity, without sacrificing the amenability of the $\mathrm{PR}$ minimization problem to iterative optimization methods.
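For readers unfamiliar with the setting, the performative risk described in the abstract is standardly defined as the expected loss under the distribution induced by the deployed model's own parameters (notation: $\theta$ for the model parameters, $\mathcal{D}(\theta)$ for the induced data distribution, and $\ell$ for the loss):

$$
\mathrm{PR}(\theta) \;=\; \mathbb{E}_{z \sim \mathcal{D}(\theta)}\big[\ell(z;\theta)\big].
$$

The non-convexity difficulty noted in the abstract arises because $\theta$ appears both inside the loss and in the distribution $\mathcal{D}(\theta)$, so convexity of $\ell(z;\cdot)$ for each fixed $z$ does not transfer to $\mathrm{PR}$.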