The Lazy Online Subgradient Algorithm is Universal on Strongly Convex Domains

Published: 09 Nov 2021, Last Modified: 05 May 2023
NeurIPS 2021 Poster
Readers: Everyone
Keywords: optimisation, online learning, FTRL, subgradient, gradient descent, sequential, machine learning, strongly convex, logarithmic, pseudoregret, expected regret, differential geometry, curvature
TL;DR: If the domain is strongly convex, Online Lazy Gradient Descent achieves $O(\log N)$ expected regret against i.i.d. opponents and $O(\sqrt N)$ regret against all opponents.
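To make the update concrete, here is a minimal sketch of the lazy (FTRL / dual-averaging) form of online subgradient descent, using the Euclidean ball as a simple strongly convex domain. The function name `lazy_ogd`, the fixed step size `eta`, and the choice of domain are illustrative assumptions, not the paper's exact algorithm or tuning.

```python
import numpy as np

def lazy_ogd(subgradients, eta=0.1, radius=1.0):
    """Lazy Online (Sub)Gradient Descent on a Euclidean ball of the given
    radius (a simple example of a strongly convex domain).

    "Lazy" means each iterate is the projection of the negative scaled sum
    of *all* past subgradients (the FTRL / dual-averaging form), rather
    than a step away from the previous projected iterate (the "greedy" form).
    """
    g_sum = None
    iterates = []
    for g in subgradients:
        g = np.asarray(g, dtype=float)
        if g_sum is None:
            g_sum = np.zeros_like(g)
        # Play the projection of -eta * (g_1 + ... + g_{t-1}) onto the ball.
        x = -eta * g_sum
        norm = np.linalg.norm(x)
        if norm > radius:
            x *= radius / norm   # Euclidean projection onto the ball
        iterates.append(x)
        g_sum += g               # then observe round t's subgradient
    return iterates

# Hypothetical usage: linear losses f_t(x) = <g_t, x> with i.i.d. Gaussian g_t.
rng = np.random.default_rng(0)
gs = rng.normal(size=(100, 3))
xs = lazy_ogd(gs, eta=0.1)
```

The distinguishing design choice is that the lazy form re-projects the accumulated subgradient sum each round instead of moving from the previous iterate; the curvature of a strongly convex domain is what this style of analysis exploits.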
Abstract: We study Online Lazy Gradient Descent for optimisation on a strongly convex domain. The algorithm is known to achieve $O(\sqrt N)$ regret against adversarial opponents; here we show it is universal, in the sense that it also achieves $O(\log N)$ expected regret against i.i.d. opponents. This improves upon the more complex meta-algorithm of Huang et al.\ \cite{FTLBall}, which only achieves $O(\sqrt{N \log N})$ adversarial and $O(\log N)$ i.i.d. bounds. In addition we show that, unlike for the simplex, order bounds for pseudo-regret and expected regret are equivalent on strongly convex domains.
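For orientation, the two notions compared in the abstract's last sentence can be written as follows; these are the usual textbook definitions, not quoted from the paper.

```latex
% Standard definitions (an assumption for orientation, not verbatim from the
% paper): losses $f_1,\dots,f_N$ on a domain $K$, iterates $x_1,\dots,x_N$.
\[
  \mathbb{E}[R_N]
    = \mathbb{E}\Big[\sum_{t=1}^{N} f_t(x_t)
        - \min_{x \in K} \sum_{t=1}^{N} f_t(x)\Big]
  \qquad \text{(expected regret)},
\]
\[
  \bar{R}_N
    = \sum_{t=1}^{N} \mathbb{E}[f_t(x_t)]
        - N \min_{x \in K} \mathbb{E}_{f \sim \mathcal{D}}[f(x)]
  \qquad \text{(pseudo-regret, } f_t \overset{\text{i.i.d.}}{\sim} \mathcal{D}\text{)}.
\]
% One always has $\bar{R}_N \le \mathbb{E}[R_N]$; on the simplex the gap can
% change the order of the bound, whereas the abstract claims the two orders
% coincide on strongly convex domains.
```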
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: zip