Optimal Exploitation of Clustering and History Information in Multi-armed Bandit Problem

Djallel Bouneffouf, Srinivasan Parthasarathy, Horst Samulowitz, Martin Wistuba

Apr 23, 2019
  • Keywords: Multi-armed bandit, Contextual bandit
  • Abstract: We consider the stochastic multi-armed bandit problem and the contextual bandit problem with historical observations and pre-clustered arms. The historical observations may contain any number of instances for each arm, and the pre-clustering information is a fixed clustering of arms provided as part of the input. We develop a variety of algorithms that effectively incorporate this offline information during the online exploration phase, and we derive their regret bounds. In particular, we develop the META algorithm, which hedges between two other algorithms: one that uses both the historical observations and the clustering, and another that uses only the historical observations. The former outperforms the latter when the clustering quality is good, and vice versa. Extensive experiments on synthetic and real-world datasets, covering Warfarin drug dosage and web-server selection for latency minimization, validate our theoretical insights and demonstrate that META is a robust strategy for optimally exploiting the pre-clustering information.
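The abstract's central idea is a meta-level hedge: run a clustering-aware bandit and a history-only bandit side by side, and let a bandit over those two base algorithms learn which one to trust. The sketch below illustrates that structure only; the class names, the warm-start interface, and the simple cluster-mean shrinkage rule are illustrative assumptions, not the paper's exact algorithms or regret-optimal rules.

```python
import math
import random

class UCB1:
    """UCB1 bandit, optionally warm-started with historical observations
    (per-arm pull counts and reward sums). Interface is illustrative."""
    def __init__(self, n_arms, hist_counts=None, hist_sums=None):
        self.counts = list(hist_counts) if hist_counts else [0] * n_arms
        self.sums = list(hist_sums) if hist_sums else [0.0] * n_arms

    def select(self):
        t = sum(self.counts) + 1
        for a, c in enumerate(self.counts):
            if c == 0:
                return a  # play every arm once before using UCB scores
        return max(range(len(self.counts)),
                   key=lambda a: self.sums[a] / self.counts[a]
                               + math.sqrt(2 * math.log(t) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward

class ClusterUCB(UCB1):
    """Clustering-aware variant: shrinks each arm's estimate toward its
    cluster mean -- a simple stand-in for the paper's clustering algorithm."""
    def __init__(self, n_arms, clusters, **kw):
        super().__init__(n_arms, **kw)
        self.clusters = clusters  # fixed partition of arm indices (given input)

    def select(self):
        t = sum(self.counts) + 1
        for a, c in enumerate(self.counts):
            if c == 0:
                return a
        def score(a):
            cl = next(c for c in self.clusters if a in c)
            cl_mean = (sum(self.sums[i] for i in cl)
                       / sum(self.counts[i] for i in cl))
            own = self.sums[a] / self.counts[a]
            # shrink toward the cluster mean; helps when clusters are coherent
            return 0.5 * own + 0.5 * cl_mean \
                   + math.sqrt(2 * math.log(t) / self.counts[a])
        return max(range(len(self.counts)), key=score)

class Meta:
    """META-style hedge: a two-armed UCB that, each round, delegates the
    arm choice to one base algorithm and credits it with the reward."""
    def __init__(self, bases):
        self.bases = bases
        self.hedge = UCB1(len(bases))

    def step(self, pull):  # pull: arm index -> observed reward
        b = self.hedge.select()
        arm = self.bases[b].select()
        r = pull(arm)
        self.bases[b].update(arm, r)
        self.hedge.update(b, r)
        return arm, r

# Toy run: 3 Bernoulli arms; the clustering {0,1} vs {2} is well-chosen here.
random.seed(0)
means = [0.2, 0.3, 0.8]
meta = Meta([ClusterUCB(3, [[0, 1], [2]]), UCB1(3)])
total = 0.0
for _ in range(500):
    _, r = meta.step(lambda a: float(random.random() < means[a]))
    total += r
print(round(total / 500, 2))  # empirical average reward over 500 rounds
```

When the clustering is coherent (arms 0 and 1 are both poor), the cluster-aware base tends to earn more reward, so the hedge shifts its pulls toward it; with a misleading clustering, the plain history-only base would win instead, which is the robustness property META is claimed to provide.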