Learning with Interactive Models over Decision-Dependent Distributions

12 May 2023 · OpenReview Archive Direct Upload · Readers: Everyone
Abstract: Classical supervised learning generally trains a single model on i.i.d. data drawn from an unknown but fixed distribution. In some real applications such as finance, however, multiple models may be trained by different companies and interact in a dynamic environment, where the data distribution may shift according to the models' decisions. In this work, we focus on two models for simplicity and formalize this scenario as a learning problem of two models over decision-dependent distributions. We develop Repeated Risk Minimization (RRM) for two models and present a sufficient condition for the existence of stable points of RRM, an equilibrium notion. We further provide theoretical analyses of the convergence of RRM to stable points at both the distribution level and with finite training samples. We also study more practical algorithms, such as gradient descent and stochastic gradient descent, for solving the RRM problem with convergence guarantees, and finally present empirical studies to validate our theoretical analysis.
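
As a minimal sketch of the setting (the notation below is an assumption, following standard repeated risk minimization formulations over decision-dependent distributions; the paper's own symbols may differ), the two-model RRM update at round t can be written as

\[
\theta_1^{(t+1)} \in \arg\min_{\theta_1} \; \mathbb{E}_{z \sim \mathcal{D}(\theta_1^{(t)},\, \theta_2^{(t)})} \big[ \ell_1(z; \theta_1) \big],
\qquad
\theta_2^{(t+1)} \in \arg\min_{\theta_2} \; \mathbb{E}_{z \sim \mathcal{D}(\theta_1^{(t)},\, \theta_2^{(t)})} \big[ \ell_2(z; \theta_2) \big],
\]

where \(\mathcal{D}(\theta_1, \theta_2)\) denotes the data distribution induced by the two deployed models. Under this notation, a pair \((\theta_1^\star, \theta_2^\star)\) is a stable point if each model is optimal for the distribution that the pair itself induces, i.e. \(\theta_i^\star \in \arg\min_{\theta_i} \mathbb{E}_{z \sim \mathcal{D}(\theta_1^\star, \theta_2^\star)} [\ell_i(z; \theta_i)]\) for \(i = 1, 2\), so repeated retraining leaves both models unchanged.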