Uncertainty quantification for iterative algorithms in linear models with application to early stopping

Published: 27 Apr 2024 · Last Modified: 06 May 2024 · OpenReview Archive Direct Upload · Everyone · CC BY 4.0
Abstract: This paper investigates the iterates $\hat b^1,\dots,\hat b^T$ obtained from iterative algorithms in high-dimensional linear regression problems, in the regime where the feature dimension $p$ is comparable with the sample size $n$, i.e., $p \asymp n$. The analysis and proposed estimators are applicable to Gradient Descent (GD), proximal GD, and their accelerated variants such as Fast Iterative Soft-Thresholding (FISTA). The paper proposes novel estimators of the generalization error of the iterate $\hat b^t$ for any fixed iteration $t$ along the trajectory. These estimators are proved to be $\sqrt{n}$-consistent under Gaussian designs. Applications to early stopping are provided: when the generalization error of the iterates is a U-shaped function of the iteration $t$, the estimates allow us to select from the data an iteration $\hat t$ that achieves the smallest generalization error along the trajectory. Additionally, we provide a technique for developing debiasing corrections and valid confidence intervals for the components of the true coefficient vector from the iterate $\hat b^t$ at any finite iteration $t$. Extensive simulations on synthetic data illustrate the theoretical results.
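To make the setting concrete, the sketch below runs proximal GD (ISTA) iterates for the Lasso on a synthetic Gaussian design with $p \asymp n$ and selects an early-stopping iteration $\hat t$ by minimizing an estimate of the generalization error of each iterate. This is a minimal illustration only: the paper's $\sqrt{n}$-consistent estimators are computed from the training data alone, whereas here a held-out validation set serves as a stand-in proxy, and all names (`risk_proxy`, `soft_threshold`) and parameter choices are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch: ISTA iterates b^1,...,b^T for the Lasso on a Gaussian design,
# with early stopping at the iteration minimizing an estimated generalization error.
# NOTE: the validation-set proxy below is NOT the paper's estimator; it is a
# placeholder for any estimate of the generalization error of b^t.
import numpy as np

rng = np.random.default_rng(0)
n, p, s = 300, 250, 20                      # sample size, dimension (p comparable to n), sparsity
b_true = np.zeros(p); b_true[:s] = 1.0
X = rng.standard_normal((n, p))             # Gaussian design
y = X @ b_true + rng.standard_normal(n)
X_val = rng.standard_normal((2000, p))      # held-out data, used only as a proxy here
y_val = X_val @ b_true + rng.standard_normal(2000)

lam = 2.0 * np.sqrt(np.log(p) / n)          # illustrative Lasso penalty level
step = 1.0 / np.linalg.norm(X, 2) ** 2      # step size 1/L, L = largest eigenvalue of X^T X

def soft_threshold(v, thr):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

b = np.zeros(p)
T = 200
iterates = []                               # trajectory b^1, ..., b^T
risk_proxy = np.empty(T)
for t in range(T):
    # Proximal gradient (ISTA) update for (1/2)||y - Xb||^2 + n*lam*||b||_1
    b = soft_threshold(b - step * X.T @ (X @ b - y), step * n * lam)
    iterates.append(b.copy())
    # Proxy for the generalization error of b^t (the paper estimates this from training data)
    risk_proxy[t] = np.mean((y_val - X_val @ b) ** 2)

t_hat = int(np.argmin(risk_proxy))          # data-driven early-stopping iteration
print(f"selected iteration t_hat = {t_hat + 1}, estimated risk = {risk_proxy[t_hat]:.3f}")
```

When the risk curve is U-shaped in $t$, `iterates[t_hat]` plays the role of the early-stopped estimator $\hat b^{\hat t}$; swapping the ISTA update for FISTA or plain GD only changes the loop body.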