Byzantine-Resilient Decentralized Multi-Armed Bandits

TMLR Paper 2416 Authors

23 Mar 2024 (modified: 14 Jul 2024) · Decision pending for TMLR · CC BY-SA 4.0
Abstract: In decentralized cooperative multi-armed bandits (MAB), each agent observes a distinct stream of rewards and seeks to exchange information with others to select a sequence of arms so as to minimize its regret. Agents in the cooperative setting can outperform a single agent running a MAB method such as Upper-Confidence Bound (UCB) independently. In this work, we study how to recover such salient behavior when an unknown fraction of the agents can be \emph{Byzantine}, that is, communicate arbitrarily wrong information in the form of reward mean-estimates or confidence sets. This framework can be used to model attackers in computer networks, injectors of offensive content into recommender systems, or manipulators of financial markets. Our key contribution is the development of a fully decentralized resilient UCB algorithm that fuses an information mixing step among agents with a truncation of inconsistent and extreme values. This truncation step enables us to establish that the performance of each normal agent is no worse than that of the classic single-agent UCB1 algorithm in terms of regret and, more importantly, that the cumulative regret of all normal agents is strictly better than in the non-cooperative case, provided that each agent has at least $3f+1$ neighbors, where $f$ is the maximum possible number of Byzantine agents in each agent's neighborhood. Extensions to time-varying neighbor graphs are developed, and minimax lower bounds on the achievable regret are further established. Experiments corroborate the merits of this framework in practice.
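To make the truncation idea concrete, below is a minimal sketch (not the authors' exact algorithm) of a trimmed-mean fusion step: an agent sorts the reward mean-estimates reported by its neighbors, discards the $f$ largest and $f$ smallest values (the positions a Byzantine neighbor could occupy), and averages the survivors with its own estimate. The function name `trimmed_mix` and the numerical values are illustrative assumptions.

```python
import numpy as np

def trimmed_mix(own_estimate, neighbor_estimates, f):
    """Robustly fuse neighbors' reward mean-estimates for a single arm.

    Illustrative sketch only: drop the f largest and f smallest neighbor
    reports, then average the remainder with the agent's own estimate.
    With at least 3f + 1 neighbors, the surviving values are guaranteed
    to lie within the range of the honest agents' reports.
    """
    assert len(neighbor_estimates) >= 3 * f + 1, "need at least 3f+1 neighbors"
    vals = np.sort(np.asarray(neighbor_estimates, dtype=float))
    trimmed = vals[f: len(vals) - f] if f > 0 else vals
    return float(np.mean(np.append(trimmed, own_estimate)))

# Example: 7 neighbors, at most f = 2 Byzantine agents in the neighborhood.
honest = [0.48, 0.52, 0.50, 0.49, 0.51]
byzantine = [10.0, -10.0]  # arbitrarily wrong reports
fused = trimmed_mix(0.50, honest + byzantine, f=2)
```

Note that the outlying Byzantine reports (10.0 and -10.0) are removed by the truncation, so the fused estimate stays within the interval spanned by the honest values.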
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We corrected all typos and revised several passages according to the reviewers' suggestions. We added more discussion of limitations, along with two new simulations and three new figures. All revised content is marked in blue for ease of spotting the changes.
Assigned Action Editor: ~Russell_Tsuchida1
Submission Number: 2416