Uniform Convergence with Square-Root Lipschitz Loss

Published: 21 Sept 2023 · Last Modified: 30 Jan 2024 · NeurIPS 2023 poster
Keywords: Uniform Convergence, Square-Root Lipschitz, Benign Overfitting, Minimal Norm Interpolation, Phase Retrieval, ReLU Regression, Matrix Sensing
Abstract: We establish generic uniform convergence guarantees for Gaussian data in terms of the Rademacher complexity of the hypothesis class and the Lipschitz constant of the square root of the scalar loss function. We show how these guarantees substantially generalize previous results based on smoothness (the Lipschitz constant of the derivative) and allow us to handle the broader class of square-root-Lipschitz losses, which also includes non-smooth loss functions appropriate for studying phase retrieval and ReLU regression, as well as to rederive and better understand “optimistic rate” and interpolation learning guarantees.
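For concreteness, here is a minimal LaTeX sketch of the square-root Lipschitz condition as it is commonly stated in this line of work; the notation and constant convention are assumptions for illustration, and the paper's exact statement may differ.

% Assumed convention: a nonnegative scalar loss \ell(\hat{y}, y) is
% H-square-root-Lipschitz when \sqrt{\ell} is \sqrt{H}-Lipschitz in
% the prediction \hat{y}.
\[
\bigl|\sqrt{\ell(\hat{y}_1, y)} - \sqrt{\ell(\hat{y}_2, y)}\bigr|
  \;\le\; \sqrt{H}\,\bigl|\hat{y}_1 - \hat{y}_2\bigr|
  \qquad \text{for all } \hat{y}_1, \hat{y}_2, y.
\]
% Illustrative examples, all 1-square-root-Lipschitz even though the
% last two are non-smooth (their square roots are 1-Lipschitz because
% |\cdot| and \max(\cdot,0) are 1-Lipschitz):
\[
\ell(\hat{y}, y) = (\hat{y} - y)^2, \qquad
\ell(\hat{y}, y) = \bigl(|\hat{y}| - y\bigr)^2 \ \text{(phase retrieval)}, \qquad
\ell(\hat{y}, y) = \bigl(\max(\hat{y}, 0) - y\bigr)^2 \ \text{(ReLU regression)}.
\]

This is why the square-root Lipschitz condition covers the phase retrieval and ReLU regression losses mentioned in the abstract, while smoothness-based analyses do not apply to them.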
Supplementary Material: pdf
Submission Number: 10458