Gradient Descent Is Optimal Under Lower Restricted Secant Inequality And Upper Error Bound

Published: 31 Oct 2022, Last Modified: 15 Dec 2022, NeurIPS 2022 Accept
Keywords: First-Order Optimization, Non-Convex, Deterministic, Gradient Descent, Restricted Secant Inequality, Error Bounds
TL;DR: Using the performance estimation problem (PEP) framework, we show that gradient descent is exactly optimal on a class of functions relevant to machine learning.
Abstract: The study of first-order optimization is sensitive to the assumptions made on the objective functions. These assumptions induce complexity classes which play a key role in worst-case analysis, including the fundamental concept of algorithm optimality. Recent work argues that strong convexity and smoothness, two assumptions popular in the literature, lead to a pathological definition of the condition number. Motivated by this result, we focus on the class of functions satisfying a lower restricted secant inequality and an upper error bound. On top of being robust to the aforementioned pathological behavior and including some non-convex functions, this pair of conditions displays interesting geometrical properties. In particular, the necessary and sufficient conditions to interpolate a set of points and their gradients within the class can be separated into simple conditions on each sampled gradient. This allows the performance estimation problem (PEP) to be solved analytically, leading to a lower bound on the convergence rate that proves gradient descent to be exactly optimal on this class of functions among all first-order algorithms.
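For reference, the two conditions defining the class are commonly stated as follows; this is a sketch using standard notation not taken verbatim from the paper, where $X^\star$ denotes the set of minimizers, $x_p$ the projection of $x$ onto $X^\star$, and $0 < \mu \le L$ the class parameters:

$$
\text{(lower restricted secant inequality)} \qquad \langle \nabla f(x),\, x - x_p \rangle \;\ge\; \mu\, \|x - x_p\|^2 \quad \forall x,
$$
$$
\text{(upper error bound)} \qquad \|\nabla f(x)\| \;\le\; L\, \|x - x_p\| \quad \forall x.
$$

Under these two conditions, the ratio $L/\mu$ plays the role of a condition number without requiring convexity of $f$.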