From Optimization Dynamics to Generalization Bounds via Łojasiewicz Gradient Inequality

20 Jul 2022, 03:27 (modified: 09 Oct 2022, 21:12) · Accepted by TMLR
Abstract: Optimization and generalization are two essential aspects of statistical machine learning. In this paper, we propose a framework that connects optimization with generalization by analyzing the generalization error based on the optimization trajectory under the gradient flow algorithm. The key ingredient of this framework is the Uniform-LGI, a property that is generally satisfied when training machine learning models. Leveraging the Uniform-LGI, we first derive convergence rates for the gradient flow algorithm, and then give generalization bounds for a large class of machine learning models. We further apply our framework to three distinct machine learning models: linear regression, kernel regression, and two-layer neural networks. Through our approach, we obtain generalization estimates that match or extend previous results.
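For context, a minimal sketch of the classical (local) Łojasiewicz gradient inequality and how it drives convergence under gradient flow — the paper's Uniform-LGI is a variant of this property assumed to hold along the optimization trajectory; the constants $c$ and exponent $\beta$ below are standard notation, not necessarily the paper's:

```latex
% Łojasiewicz gradient inequality: for a loss L with infimum L^*,
% there exist c > 0 and \beta \in [1/2, 1) such that
\|\nabla L(\theta)\| \ge c\,\bigl(L(\theta) - L^*\bigr)^{\beta}.
% Along the gradient flow \dot{\theta}(t) = -\nabla L(\theta(t)):
\frac{d}{dt}\bigl(L(\theta(t)) - L^*\bigr)
  = -\|\nabla L(\theta(t))\|^{2}
  \le -c^{2}\,\bigl(L(\theta(t)) - L^*\bigr)^{2\beta}.
% Integrating this differential inequality gives an explicit rate:
% exponential decay when \beta = 1/2 (the Polyak--Łojasiewicz case),
% polynomial decay when \beta \in (1/2, 1).
```

This is the standard mechanism by which a gradient inequality converts trajectory information into a convergence rate, which the abstract then couples with generalization bounds.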
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Camera-ready version
Assigned Action Editor: ~Yiming_Ying1
Submission Number: 289