Theoretical Analysis of Leave-one-out Cross Validation for Non-differentiable Penalties under High-dimensional Settings

Published: 22 Jan 2025 · Last Modified: 06 Mar 2025 · AISTATS 2025 Poster · CC BY 4.0
TL;DR: We prove $1/n$ convergence of the leave-one-out (LO) risk estimate in the proportional high-dimensional regime, and show that hyperparameter tuning by minimizing LO is consistent.
Abstract: Despite a large and significant body of recent work focusing on the hyperparameter tuning of regularized models in the high-dimensional regime, a theoretical understanding of this problem for non-differentiable penalties, such as the generalized LASSO and the nuclear norm, is missing. In this paper, we resolve this challenge. We study the hyperparameter tuning problem in the proportional high-dimensional regime, where both the sample size $n$ and the number of features $p$ are large, while $n/p$ and the signal-to-noise ratio (per observation) remain finite. To achieve this goal, we first provide finite-sample upper bounds on the expected squared error of leave-one-out cross-validation (LO) in estimating the out-of-sample risk. Building on this result, we establish the consistency of the hyperparameter tuning method based on minimizing LO's estimate. Our simulation results confirm the accuracy and sharpness of our theoretical results.
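As a concrete illustration (not the paper's implementation), the following Python sketch tunes the LASSO penalty by minimizing the naive LO estimate of out-of-sample risk, i.e., refitting the model with each observation left out. The data-generating model, candidate grid, and use of scikit-learn's `Lasso` are assumptions made for this example.

```python
# Minimal sketch: hyperparameter tuning by minimizing the leave-one-out (LO)
# estimate of out-of-sample risk for LASSO. Illustrative assumptions throughout.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 200                      # proportional regime: n/p stays finite
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:10] = 1.0                      # sparse ground truth (assumed)
y = X @ beta + rng.standard_normal(n)

def lo_risk(lam):
    """Naive LO estimate of the out-of-sample risk at regularization lam."""
    errs = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i     # leave observation i out
        model = Lasso(alpha=lam, max_iter=10_000).fit(X[mask], y[mask])
        errs[i] = (y[i] - model.predict(X[i:i + 1])[0]) ** 2
    return errs.mean()

lams = np.logspace(-2, 0, 10)        # candidate grid (illustrative)
risks = [lo_risk(lam) for lam in lams]
lam_star = lams[int(np.argmin(risks))]   # tuning rule: minimize LO's estimate
print(f"selected lambda: {lam_star:.3f}")
```

Note that this naive version requires $n$ refits per candidate value; the paper's contribution is not computational but statistical, namely finite-sample bounds on how accurately this LO estimate tracks the out-of-sample risk when $n$ and $p$ grow proportionally.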
Submission Number: 1520