Non-convex online learning via algorithmic equivalence

Published: 31 Oct 2022, Last Modified: 20 Jan 2023
NeurIPS 2022 Accept
Abstract: We study an algorithmic equivalence technique between non-convex gradient descent and convex mirror descent. We begin with the harder problem of regret minimization in online non-convex optimization, and show that under certain geometric and smoothness conditions, online gradient descent applied to non-convex functions is an approximation of online mirror descent applied to convex functions under reparameterization. In continuous time, Amid and Warmuth showed that gradient flow with this reparameterization is \emph{exactly} equivalent to continuous-time mirror descent, but a theory for the analogous discrete-time algorithms was left as an open problem. We prove an $O(T^{\frac{2}{3}})$ regret bound for non-convex online gradient descent in this setting, resolving this open problem. Our analysis is based on a new and simple algorithmic equivalence method.
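To make the continuous-time equivalence concrete, here is a short derivation of its classic special case, exponentiated gradient (the notation and the specific mirror map are illustrative choices, not necessarily the paper's general setting). Gradient flow on the non-convex reparameterized objective $g(u) = f(u \odot u / 4)$ induces exactly the entropy mirror descent flow on $x = u \odot u / 4$:

```latex
% Gradient flow on the reparameterized (non-convex) objective g(u) = f(u \odot u / 4):
\dot{u} = -\nabla_u f\!\left(\tfrac{u \odot u}{4}\right)
        = -\tfrac{u}{2} \odot \nabla f(x), \qquad x := \tfrac{u \odot u}{4}.
% By the chain rule, the induced dynamics on x are
\dot{x} = \tfrac{u}{2} \odot \dot{u} = -x \odot \nabla f(x),
% which is unnormalized mirror descent flow for the entropy mirror map
% \phi(x) = \textstyle\sum_i (x_i \log x_i - x_i), since
% \tfrac{d}{dt}\,\nabla\phi(x) = \tfrac{d}{dt}\,\log x = -\nabla f(x).
```

The discrete-time question addressed by the paper is how well the corresponding discrete updates track each other once the step size is finite.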
TL;DR: Resolves the open problem of Amid and Warmuth on reparameterizing mirror descent as gradient descent in discrete time.
Supplementary Material: pdf
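Below is a minimal numerical sketch of the same special case in discrete time (not code from the paper): plain gradient descent on $u$ for the non-convex objective $g(u) = f(u \odot u / 4)$ closely tracks unnormalized exponentiated gradient on $x = u \odot u / 4$ when the step size is small. The toy loss `f`, the step size `eta`, and the horizon `T` are illustrative assumptions.

```python
import numpy as np

# Sketch of the Amid-Warmuth style equivalence in discrete time:
# gradient descent on u for the NON-CONVEX objective g(u) = f(u*u/4)
# versus exponentiated gradient (entropy mirror descent) on x = u*u/4.
# Toy loss, step size, and horizon are illustrative assumptions.

def grad_f(x, a):
    """Gradient of the toy convex loss f(x) = 0.5 * ||x - a||^2."""
    return x - a

a = np.array([0.2, 1.5, 0.7])   # minimizer of the toy loss (positive orthant)
eta = 1e-3                      # small step: both updates approximate the common flow
T = 20_000

x_eg = np.ones(3)               # mirror descent (EG) iterate on x
u = 2.0 * np.sqrt(np.ones(3))   # GD iterate on u, so that u*u/4 matches x_eg at t = 0

for _ in range(T):
    # Unnormalized exponentiated gradient: multiplicative update on x.
    x_eg = x_eg * np.exp(-eta * grad_f(x_eg, a))
    # Plain gradient descent on u for g(u) = f(u*u/4); by the chain rule,
    # grad_u g(u) = (u / 2) * grad_f(u*u/4).
    u = u - eta * (u / 2.0) * grad_f(u * u / 4.0, a)

print("EG iterate:              ", x_eg)
print("Reparameterized GD u*u/4:", u * u / 4.0)
print("max gap:", np.abs(x_eg - u * u / 4.0).max())
```

For small `eta` the two trajectories stay close, even though the objective in `u` is non-convex; the paper's contribution is to quantify this discrete-time approximation, yielding the $O(T^{\frac{2}{3}})$ regret bound stated in the abstract.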