Parameter-free Regret in High Probability with Heavy Tails

Published: 31 Oct 2022, Last Modified: 10 Oct 2022
NeurIPS 2022 Accept
Keywords: Online learning, Parameter-free, Online Convex Optimization, Heavy tails, Regularization
Abstract: We present new algorithms for online convex optimization over unbounded domains that obtain parameter-free regret in high probability, given access only to potentially heavy-tailed subgradient estimates. Previous work in unbounded domains considers only in-expectation results for sub-exponential subgradients. Unlike in the bounded-domain case, we cannot rely on straightforward martingale concentration due to the exponentially large iterates produced by the algorithm. We develop new regularization techniques to overcome these problems. Overall, with probability at least 1 − δ, for all comparators u, our algorithm achieves regret Õ(∥u∥T^{1/p} log(1/δ)) for subgradients with bounded p-th moments for some p ∈ (1, 2].
TL;DR: We produce parameter-free online learning algorithms whose regret bound holds in high probability even for heavy-tailed subgradient estimates.
Supplementary Material: pdf
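
The regret bound in the abstract is easier to read when typeset. Below is a minimal LaTeX restatement, assuming the standard linearized-regret notation of online convex optimization: x_t denotes the algorithm's iterates, g_t the (possibly heavy-tailed) subgradient estimates, and u any comparator. These symbols are conventional and not spelled out on this page; the precise definitions are in the paper itself.

% Restatement of the bound from the abstract (notation assumed, not from this page):
% g_t have bounded p-th moments for some p in (1, 2], and the bound holds
% with probability at least 1 - delta, simultaneously for all comparators u.
\[
  R_T(u) \;=\; \sum_{t=1}^{T} \langle g_t, \, x_t - u \rangle
  \;\le\; \tilde{O}\!\left( \lVert u \rVert \, T^{1/p} \log(1/\delta) \right).
\]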