Towards Best-of-All-Worlds Online Learning with Feedback Graphs

Published: 09 Nov 2021, Last Modified: 05 May 2023
NeurIPS 2021 Poster
Readers: Everyone
Keywords: Online Learning, Multi-armed Bandit, Feedback Graphs, Adversarial Corruptions
Abstract: We study the online learning with feedback graphs framework introduced by Mannor and Shamir (2011), in which the feedback received by the online learner is specified by a graph $G$ over the available actions. We develop an algorithm that simultaneously achieves regret bounds of the form: $O(\sqrt{\theta(G) T})$ with adversarial losses; $O(\theta(G)\mathrm{polylog}{T})$ with stochastic losses; and $O(\theta(G)\mathrm{polylog}{T} + \sqrt{\theta(G) C})$ with stochastic losses subject to $C$ adversarial corruptions. Here, $\theta(G)$ is the $clique~covering~number$ of the graph $G$. Our algorithm is an instantiation of Follow-the-Regularized-Leader with a novel regularization that can be seen as a product of a Tsallis entropy component (inspired by Zimmert and Seldin (2019)) and a Shannon entropy component (analyzed in the corrupted stochastic case by Amir et al. (2020)), thus subtly interpolating between the two forms of entropies. One of our key technical contributions is in establishing the convexity of this regularizer and controlling its inverse Hessian, despite its complex product structure.
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
TL;DR: We provide the first best-of-all-worlds regret guarantee for online learning with graph-structured feedback via a novel Tsallis-Shannon regularization.
Supplementary Material: pdf