Overparameterization Improves Robustness to Covariate Shift in High Dimensions

NeurIPS 2021 Poster · 21 May 2021 (edited 14 Jan 2022) · Readers: Everyone
  • Keywords: Deep Learning Theory, Distribution Shift, Random Matrices, High-Dimensional Statistics
  • TL;DR: We provide and analyze an exactly solvable model of random feature regression (with covariate shift) that reproduces existing empirical phenomena related to distribution shift.
  • Abstract: A significant obstacle in the development of robust machine learning models is *covariate shift*, a form of distribution shift that occurs when the input distributions of the training and test sets differ while the conditional label distributions remain the same. Despite the prevalence of covariate shift in real-world applications, a theoretical understanding in the context of modern machine learning has remained lacking. In this work, we examine the exact high-dimensional asymptotics of random feature regression under covariate shift and present a precise characterization of the limiting test error, bias, and variance in this setting. Our results motivate a natural partial order over covariate shifts that provides a sufficient condition for determining when the shift will harm (or even help) test performance. We find that overparameterized models exhibit enhanced robustness to covariate shift, providing one of the first theoretical explanations for this ubiquitous empirical phenomenon. Additionally, our analysis reveals an exact linear relationship between the in-distribution and out-of-distribution generalization performance, offering an explanation for this surprising recent observation. (A toy simulation of this setup is sketched after this list.)
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
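To make the setting concrete, here is a minimal, hedged sketch of random feature regression under covariate shift: a ReLU random-feature model is fit by minimum-norm least squares on Gaussian inputs, then evaluated on test inputs whose per-coordinate variances differ from training (the labels' conditional distribution is unchanged). The dimensions, noise level, anisotropy profile, and variable names below are illustrative assumptions, not the paper's exact asymptotic setting or its analytical characterization.

```python
# Toy illustration (assumptions, not the paper's exact model): random feature
# regression with a covariate shift in the input covariance only.
import numpy as np

rng = np.random.default_rng(0)

d = 50          # input dimension
n_train = 200   # training samples
n_test = 2000   # test samples

# Ground-truth linear target; the conditional label distribution is the same
# for the training and test sets (the defining property of covariate shift).
beta = rng.normal(size=d) / np.sqrt(d)

def sample(n, cov_scale):
    """Draw Gaussian inputs with per-coordinate variances up to cov_scale."""
    scales = np.linspace(1.0, cov_scale, d)
    X = rng.normal(size=(n, d)) * np.sqrt(scales)
    y = X @ beta + 0.1 * rng.normal(size=n)
    return X, y

X_train, y_train = sample(n_train, cov_scale=1.0)  # training distribution
X_id, y_id = sample(n_test, cov_scale=1.0)         # in-distribution test
X_ood, y_ood = sample(n_test, cov_scale=3.0)       # covariate-shifted test

relu = lambda Z: np.maximum(Z, 0.0)

for n_features in (50, 100, 400, 1600):  # under- to over-parameterized
    W = rng.normal(size=(d, n_features)) / np.sqrt(d)  # fixed random features
    F_train = relu(X_train @ W)
    # Minimum-norm least-squares fit of the second-layer weights.
    a = np.linalg.pinv(F_train) @ y_train
    err_id = np.mean((relu(X_id @ W) @ a - y_id) ** 2)
    err_ood = np.mean((relu(X_ood @ W) @ a - y_ood) ** 2)
    print(f"p={n_features:5d}  ID error={err_id:.3f}  OOD error={err_ood:.3f}")
```

Sweeping `n_features` from below to well above `n_train` crosses the interpolation threshold, so a run of this sketch lets one compare how the in-distribution and out-of-distribution errors move with overparameterization; the paper's contribution is an exact asymptotic characterization of these quantities rather than a simulation.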