Non-Asymptotic Generalization and Optimization Bounds for Stochastic Gauss-Newton in Deep Neural Networks
TL;DR: We establish novel non-asymptotic convergence and algorithm-dependent generalization bounds for the stochastic Gauss-Newton method in deep learning.
Abstract: An important question in deep learning is how higher-order optimization methods affect generalization. In this work, we analyze a stochastic Gauss-Newton (SGN) method with Levenberg-Marquardt damping and mini-batch sampling for training overparameterized deep neural networks with smooth activations in a regression setting. Our theoretical contributions are twofold. First, we establish finite-time optimization bounds via a variable-metric analysis in parameter space, with explicit dependence on the batch size, network width, and depth. Second, we derive non-asymptotic generalization bounds for SGN using algorithmic stability in the overparameterized regime, characterizing the impact of curvature, batch size, and overparameterization on generalization performance. Our theoretical results identify a favorable generalization regime for SGN in which a larger minimum eigenvalue of the Gauss-Newton matrix along the optimization path, together with smaller batch sizes, yields tighter stability bounds.
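To make the setting concrete, the following is a minimal sketch of one mini-batch stochastic Gauss-Newton update with Levenberg-Marquardt damping for a squared-error regression loss. The two-layer tanh network, parameter packing, damping value, and step size are illustrative assumptions, not the paper's exact construction or analysis setting.

```python
import jax
import jax.numpy as jnp

def residuals(theta, x, y):
    # Hypothetical two-layer network with a smooth (tanh) activation; theta packs
    # (w1, b1, w2, b2) for input dimension d and width m. The residual is
    # r_i = f(x_i; theta) - y_i for the regression loss (1 / 2b) * ||r||^2.
    d, m = x.shape[1], 32
    w1 = theta[: d * m].reshape(d, m)
    b1 = theta[d * m : d * m + m]
    w2 = theta[d * m + m : d * m + 2 * m]
    b2 = theta[-1]
    h = jnp.tanh(x @ w1 + b1)
    return h @ w2 + b2 - y

def sgn_lm_step(theta, x_batch, y_batch, damping=1e-3, lr=1.0):
    # One stochastic Gauss-Newton / Levenberg-Marquardt update on a mini-batch:
    # solve (J^T J / b + damping * I) step = J^T r / b, then move against the step.
    r = residuals(theta, x_batch, y_batch)               # residuals, shape (b,)
    J = jax.jacrev(residuals)(theta, x_batch, y_batch)   # mini-batch Jacobian, shape (b, p)
    b = r.shape[0]
    G = J.T @ J / b + damping * jnp.eye(theta.shape[0])  # damped Gauss-Newton matrix
    g = J.T @ r / b                                      # mini-batch gradient of the loss
    return theta - lr * jnp.linalg.solve(G, g)

# Example usage with random data, input dimension d=5, width m=32, batch size 16.
d, m, batch = 5, 32, 16
theta = 0.1 * jax.random.normal(jax.random.PRNGKey(0), (d * m + 2 * m + 1,))
x = jax.random.normal(jax.random.PRNGKey(1), (batch, d))
y = jax.random.normal(jax.random.PRNGKey(2), (batch,))
theta = sgn_lm_step(theta, x, y)
```

The damping term keeps the Gauss-Newton matrix well conditioned even when the mini-batch Jacobian is rank-deficient, which is the role the Levenberg-Marquardt regularization plays in the damped update above.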
Submission Number: 661