Keywords: online learning, bandit
TL;DR: This paper studies the stochastic multi-armed bandit problem with graph feedback, where two arms are connected if they are similar; it presents regret bounds and validates the results through experiments.
Abstract: In this paper, we study the stochastic multi-armed bandit problem with graph feedback. Motivated by clinical trials and recommendation problems, we assume that two arms are connected if and only if they are similar (i.e., their means are close enough). We establish a regret lower bound for this novel feedback structure and introduce two UCB-based algorithms: D-UCB with problem-independent regret upper bounds and C-UCB with problem-dependent upper bounds. Leveraging the similarity structure, we also consider the scenario where the number of arms increases over time. Practical applications of this scenario include Q&A platforms (Reddit, Stack Overflow, Quora) and product reviews on Amazon and Flipkart: answers (or product reviews) continually appear on the website, and the goal is to display the best ones at the top. When the means of the arms are generated independently from some distribution, we provide regret upper bounds for both algorithms and discuss the sub-linearity of the bounds in relation to the distribution of means. Finally, we conduct experiments to validate the theoretical results.
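For intuition, below is a minimal sketch of a UCB-style bandit with graph feedback, where pulling an arm also reveals reward samples for its neighbors in the similarity graph. The graph construction (arms connected iff their means are within a threshold eps), the Gaussian rewards, and all parameter values are illustrative assumptions; this is not the paper's exact D-UCB or C-UCB procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
K, T, eps = 5, 2000, 0.2          # arms, horizon, similarity threshold (assumed)
means = rng.uniform(0, 1, K)      # unknown arm means, drawn from a distribution

# Similarity graph: arms i and j are connected iff their means are close.
# Note adj[i, i] is True, so the pulled arm always observes its own reward.
adj = np.abs(means[:, None] - means[None, :]) <= eps

counts = np.zeros(K)              # number of observations per arm
sums = np.zeros(K)                # running reward sums per arm

for t in range(1, T + 1):
    # Standard UCB index; arms never observed get an infinite index
    # so they are explored first.
    with np.errstate(divide="ignore", invalid="ignore"):
        ucb = np.where(counts > 0,
                       sums / counts + np.sqrt(2 * np.log(t) / counts),
                       np.inf)
    i = int(np.argmax(ucb))
    # Graph feedback: observe rewards for the pulled arm and all its neighbors.
    for j in np.flatnonzero(adj[i]):
        counts[j] += 1
        sums[j] += rng.normal(means[j], 1.0)
```

Because similar arms share observations, each pull can update several estimates at once, which is the mechanism the paper's algorithms exploit; the implementation in the linked repository should be consulted for the actual D-UCB and C-UCB definitions.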
Supplementary Material: zip
List Of Authors: Han, Qi and Guo, Fei and Li, Zhu
Latex Source Code: zip
Signed License Agreement: pdf
Code Url: https://github.com/qh1874/GraphBandits_SimilarArms
Submission Number: 25