Collaborative Linear Bandits with Adversarial Agents: Near-Optimal Regret Bounds

Published: 31 Oct 2022, Last Modified: 13 Jan 2023
NeurIPS 2022 Accept
Readers: Everyone
Keywords: Collaborative Linear Bandits, Adversarial Attacks, Multi-Agent Systems, Robustness, Provably Correct Algorithms
TL;DR: This paper studies a collaborative linear bandit setting in the presence of adversaries, proposes several novel robust algorithms, and provides the first set of tight regret bounds for this problem.
Abstract: We consider a linear stochastic bandit problem involving $M$ agents that can collaborate via a central server to minimize regret. A fraction $\alpha$ of these agents are adversarial and can act arbitrarily, leading to the following tension: while collaboration can potentially reduce regret, it can also disrupt the process of learning due to adversaries. In this work, we provide a fundamental understanding of this tension by designing new algorithms that balance the exploration-exploitation trade-off via carefully constructed robust confidence intervals. We also complement our algorithms with tight analyses. First, we develop a robust collaborative phased elimination algorithm that achieves $\tilde{O}\left(\alpha + 1/\sqrt{M}\right) \sqrt{dT}$ regret for each good agent; here, $d$ is the model dimension and $T$ is the horizon. For small $\alpha$, our result thus reveals a clear benefit of collaboration despite adversaries. Using an information-theoretic argument, we then prove a matching lower bound, thereby providing the first set of tight, near-optimal regret bounds for collaborative linear bandits with adversaries. Furthermore, by leveraging recent advances in high-dimensional robust statistics, we significantly extend our algorithmic ideas and results to (i) the generalized linear bandit model that allows for non-linear observation maps; and (ii) the contextual bandit setting that allows for time-varying feature vectors.
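The abstract describes the robust collaborative phased elimination idea only at a high level. The sketch below illustrates one way such a scheme could be organized: in each phase, every agent reports per-arm mean-reward estimates for the currently active arms, the server combines the reports with a coordinate-wise trimmed mean, and arms are eliminated using a confidence radius that scales as $\alpha + 1/\sqrt{M}$. The helper names (trimmed_mean, agent_estimates), the doubling phase schedule, and the confidence-radius constants are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of a robust collaborative phased-elimination loop for linear
# bandits. Illustration only: the trimmed-mean aggregation, phase schedule, and
# confidence radius are assumptions, not the algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)

d, K, M = 5, 20, 10           # dimension, number of arms, number of agents
alpha = 0.2                   # assumed fraction of adversarial agents
T_total = 20_000              # per-agent horizon
theta_star = rng.normal(size=d); theta_star /= np.linalg.norm(theta_star)
arms = rng.normal(size=(K, d)); arms /= np.linalg.norm(arms, axis=1, keepdims=True)


def trimmed_mean(values, trim_frac):
    """Coordinate-wise trimmed mean across agents (robust aggregation sketch)."""
    values = np.sort(values, axis=0)
    k = int(np.floor(trim_frac * values.shape[0]))
    kept = values[k: values.shape[0] - k] if values.shape[0] > 2 * k else values
    return kept.mean(axis=0)


def agent_estimates(active, n_pulls, adversarial):
    """One agent pulls every active arm n_pulls times and reports mean rewards.
    Adversarial agents may report arbitrary values (here: sign-flipped)."""
    means = arms[active] @ theta_star
    noisy = means + rng.normal(scale=1.0 / np.sqrt(n_pulls), size=means.shape)
    return -noisy if adversarial else noisy


active = np.arange(K)
t_used, phase = 0, 1
while t_used < T_total and len(active) > 1:
    n_pulls = 2 ** phase                                   # assumed phase schedule
    reports = np.stack([
        agent_estimates(active, n_pulls, adversarial=(m < int(alpha * M)))
        for m in range(M)
    ])
    est = trimmed_mean(reports, trim_frac=alpha)           # robust aggregation
    # Assumed confidence radius: adversarial bias term + averaging gain term.
    radius = 2.0 * (alpha + 1.0 / np.sqrt(M)) / np.sqrt(n_pulls)
    active = active[est >= est.max() - 2.0 * radius]       # eliminate bad arms
    t_used += n_pulls * len(active)
    phase += 1

best = active[np.argmax(arms[active] @ theta_star)]
print("surviving arms:", active, "| chosen arm:", best)
```

The trimmed mean is used here only as a familiar stand-in for a robust aggregator; the (α + 1/√M) shape of the radius mirrors the per-agent regret bound quoted in the abstract, with the first term reflecting the bias adversaries can inject and the second the variance reduction from averaging over M agents.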
Supplementary Material: pdf