A New Perspective on Least-Norm Interpolation Under Gaussian Covariates

Published: 03 Feb 2026, Last Modified: 03 Feb 2026 · AISTATS 2026 Poster · CC BY 4.0
Abstract: Least-Norm Interpolators (LNIs) in overparameterized linear models have gained attention as a tractable framework for studying interpolation phenomena that resemble empirical observations in neural networks. Most prior work on these interpolators exploits closed-form solutions when available, or relies heavily on Gaussian comparison results such as the Convex Gaussian Min-Max Theorem (CGMT). In this paper, we introduce a new perspective on LNIs under Gaussian covariates by leveraging tools from high-dimensional geometry. First, we obtain a new variational formula for the bias of any LNI under isotropic Gaussian covariates when the norm is in Milman's $M$-position. Next, we recover the sharp rates for the $\ell_1$-LNI obtained by Wang et al. (2022), using techniques from Gaussian polytopes and super-concentration. Crucially, our approach does not rely on the CGMT.
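To make the object of study concrete: the following is a minimal sketch of a least-norm interpolator under isotropic Gaussian covariates, in the $\ell_2$ case, where the closed-form solution the abstract alludes to is the Moore-Penrose pseudoinverse. The dimensions, the sparse ground-truth vector, and the noiseless setup are illustrative choices, not taken from the paper; the $\ell_1$-LNI (basis pursuit) analyzed in the paper has no such closed form and is typically computed via convex optimization.

```python
import numpy as np

# Sketch: l2 least-norm interpolation in the overparameterized regime.
# With d > n, infinitely many linear models fit the data exactly; the
# l2-LNI selects the one of minimum Euclidean norm, given in closed
# form by the Moore-Penrose pseudoinverse of the design matrix.
rng = np.random.default_rng(0)
n, d = 20, 100                        # n samples, d features (d > n)
X = rng.standard_normal((n, d))       # isotropic Gaussian covariates
beta_star = np.zeros(d)
beta_star[:5] = 1.0                   # illustrative sparse ground truth
y = X @ beta_star                     # noiseless labels

beta_hat = np.linalg.pinv(X) @ y      # l2-LNI: argmin ||b||_2 s.t. Xb = y

# The solution interpolates the training data exactly,
# and lies in the row space of X (orthogonal to the null space of X),
# which is what makes it minimum-norm among all interpolators.
assert np.allclose(X @ beta_hat, y)
```

Any other interpolator has the form `beta_hat + v` with `v` in the null space of `X`; since `beta_hat` is orthogonal to that null space, adding `v` can only increase the $\ell_2$ norm.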
Submission Number: 1989