## A Geometric Structure of Acceleration and Its Role in Making Gradients Small Fast

21 May 2021, 20:42 (modified: 26 Oct 2021, 04:12) · NeurIPS 2021 Poster · Readers: Everyone
Keywords: acceleration, convex optimization, Euclidean geometry, gradient norm, small gradients, making gradients small, composite optimization, OGM, FISTA, OGM-G, potential function-based, Lyapunov analysis, complexity bounds
TL;DR: We find a geometric structure of acceleration and use it to obtain a method for making gradients small at rate $\mathcal{O}(1/K^4)$ in the prox-grad setup.
Abstract: Since Nesterov's seminal 1983 work, many accelerated first-order optimization methods have been proposed, but their analyses lack a common unifying structure. In this work, we identify a geometric structure satisfied by a wide range of first-order accelerated methods. Using this geometric insight, we present several novel generalizations of accelerated methods. Most interesting among them is a method that reduces the squared gradient norm at an $\mathcal{O}(1/K^4)$ rate in the prox-grad setup, faster than the $\mathcal{O}(1/K^3)$ rates of Nesterov's FGM or Kim and Fessler's FPGM-m.
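To make the prox-grad setup concrete, here is a minimal sketch of the standard FISTA baseline on a small lasso problem, together with the prox-gradient mapping whose squared norm the abstract discusses. This is the well-known baseline method, not the paper's new $\mathcal{O}(1/K^4)$ method; the problem data and dimensions are illustrative assumptions.

```python
import numpy as np

# Prox-grad setup: minimize F(x) = f(x) + g(x) with f smooth, g "proximable".
# Here f(x) = 0.5*||Ax - b||^2 and g(x) = lam*||x||_1 (lasso), chosen only
# as an illustration -- NOT the problem or method from the paper.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of grad f

def grad_f(x):
    return A.T @ (A @ x - b)

def prox_g(x, t):
    # soft-thresholding: the prox operator of t*lam*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

def fista(K):
    # Standard FISTA (Beck & Teboulle) with momentum sequence t_k
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(K):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

def grad_mapping_norm_sq(x):
    # In the composite setting, "gradient norm" refers to the norm of the
    # prox-gradient mapping G(x) = L*(x - prox_g(x - grad_f(x)/L, 1/L)),
    # which vanishes exactly at minimizers of F.
    return float(np.sum((L * (x - prox_g(x - grad_f(x) / L, 1.0 / L))) ** 2))

print(grad_mapping_norm_sq(fista(5)), grad_mapping_norm_sq(fista(50)))
```

The quantity `grad_mapping_norm_sq` is the measure the abstract's rates refer to: FISTA/FGM-type methods drive it to zero at $\mathcal{O}(1/K^3)$, while the paper's proposed method achieves $\mathcal{O}(1/K^4)$.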
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.