Multiplayer Information Asymmetric Contextual Bandits

TMLR Paper 3639 Authors

07 Nov 2024 (modified: 12 Nov 2024) · Under review for TMLR · CC BY 4.0
Abstract: Single-player contextual bandits are a well-studied problem in reinforcement learning with applications in fields such as advertising, healthcare, and finance. In light of recent work on information asymmetric bandits, we propose a novel multiplayer information asymmetric contextual bandit framework in which multiple players each have their own set of actions. At every round, the players observe the same context vectors and simultaneously take an action from their own action sets, giving rise to a joint action. Upon taking this joint action, however, the players are subject to information asymmetry in (1) actions and/or (2) rewards. We design an algorithm, mLinUCB, by modifying the classical single-player algorithm LinUCB of Chu et al. (2011), and show that it achieves the optimal regret $O(\sqrt{T})$ when only one kind of asymmetry is present. We then propose a novel algorithm, ETC, built on explore-then-commit principles, which achieves the same optimal regret when both types of asymmetry are present.
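To make the setting concrete, below is a minimal Python sketch of a LinUCB-style learner per player facing shared contexts, following the linear-reward model of Chu et al. (2011). It is not the paper's mLinUCB: the abstract does not specify how the joint action determines the reward or how the action/reward asymmetry is handled, so the reward model, the `alpha` exploration parameter, and the independent per-player updates here are illustrative assumptions only.

```python
# Hypothetical sketch of per-player LinUCB with shared contexts.
# The multiplayer coupling and asymmetry handling of mLinUCB are NOT
# described in the abstract; this only illustrates the base setting.
import numpy as np

class LinUCBPlayer:
    def __init__(self, n_actions, dim, alpha=1.0):
        self.alpha = alpha  # exploration width (assumed tuning knob)
        self.A = [np.eye(dim) for _ in range(n_actions)]    # per-arm Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_actions)]  # per-arm reward sums

    def choose(self, contexts):
        """contexts: one feature vector per action in this player's action set."""
        ucbs = []
        for a, x in enumerate(contexts):
            A_inv = np.linalg.inv(self.A[a])
            theta_hat = A_inv @ self.b[a]
            # optimistic index: estimated mean + alpha * confidence width
            ucbs.append(theta_hat @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(ucbs))

    def update(self, action, x, reward):
        self.A[action] += np.outer(x, x)
        self.b[action] += reward * x

# Two players with their own action sets observe contexts each round and
# simultaneously act, forming a joint action. For illustration, each player
# here receives an individual linear reward (an assumption, not the paper's
# joint-reward model) with no information asymmetry.
rng = np.random.default_rng(0)
dim, T = 5, 1000
players = [LinUCBPlayer(n_actions=3, dim=dim), LinUCBPlayer(n_actions=4, dim=dim)]
theta_star = rng.normal(size=dim)

for t in range(T):
    ctxs = [[rng.normal(size=dim) for _ in range(3)],
            [rng.normal(size=dim) for _ in range(4)]]
    joint_action = [p.choose(c) for p, c in zip(players, ctxs)]
    for p, c, a in zip(players, ctxs, joint_action):
        reward = theta_star @ c[a] + 0.1 * rng.normal()
        p.update(a, c[a], reward)
```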
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Chicheng_Zhang1
Submission Number: 3639