From Optimization Dynamics to Generalization Bounds via Łojasiewicz Gradient Inequality

Published: 09 Oct 2022, Last Modified: 28 Feb 2023
Accepted by TMLR
Abstract: Optimization and generalization are two essential aspects of statistical machine learning. In this paper, we propose a framework that connects optimization with generalization by analyzing the generalization error based on the optimization trajectory under the gradient flow algorithm. The key ingredient of this framework is the Uniform-LGI, a property that is generally satisfied when training machine learning models. Leveraging the Uniform-LGI, we first derive convergence rates for the gradient flow algorithm and then give generalization bounds for a large class of machine learning models. We further apply our framework to three distinct machine learning models: linear regression, kernel regression, and two-layer neural networks. Through our approach, we obtain generalization estimates that match or extend previous results.
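For context, the classical Łojasiewicz gradient inequality (LGI) underlying this framework states that, near a critical point $\theta^*$ of a loss $L$, there exist $c > 0$ and $\beta \in [1/2, 1)$ such that

$$\|\nabla L(\theta)\| \ge c \, |L(\theta) - L(\theta^*)|^{\beta}.$$

Along the gradient flow $\dot{\theta}(t) = -\nabla L(\theta(t))$ one has $\frac{d}{dt} L(\theta(t)) = -\|\nabla L(\theta(t))\|^2 \le -c^2 \, \big(L(\theta(t)) - L(\theta^*)\big)^{2\beta}$, which gives exponential convergence of the loss when $\beta = 1/2$ and polynomial rates when $\beta \in (1/2, 1)$. Note this is only the standard local form; the paper's Uniform-LGI is, per the abstract, a variant assumed to hold uniformly along the optimization trajectory.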
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera-ready version
Code: https://github.com/mathematicallfs/Uniform-LGI
Assigned Action Editor: ~Yiming_Ying1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 289