Batched Thompson Sampling for Multi-Armed Bandits

CoRR 2021 (modified: 05 Feb 2023)
Abstract: We study Thompson Sampling algorithms for stochastic multi-armed bandits in the batched setting, in which we want to minimize the regret over a sequence of arm pulls using a small number of policy changes (or batches). We propose two algorithms and demonstrate their effectiveness through experiments on both synthetic and real datasets. We also analyze the proposed algorithms theoretically and obtain almost tight regret-batch tradeoffs for the two-arm case.
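The abstract does not spell out the two proposed algorithms, but the core idea of batched Thompson Sampling can be illustrated with a minimal sketch: within a batch, arms are chosen by sampling from posteriors that stay frozen, and observed rewards are folded into the posteriors only at batch boundaries, so the policy changes at most once per batch. The Beta-Bernoulli model, the function name, and the fixed equal batch sizes below are illustrative assumptions, not the paper's actual algorithms.

```python
import random

def batched_thompson_sampling(arm_means, horizon, num_batches, seed=0):
    """Illustrative batched Thompson Sampling for Bernoulli arms.

    Assumption for this sketch: equal-size batches and Beta(1, 1) priors.
    The sampling policy (the posteriors) is updated only `num_batches`
    times, at the end of each batch.
    """
    rng = random.Random(seed)
    k = len(arm_means)
    alpha = [1.0] * k  # Beta posterior parameter (prior + successes)
    beta = [1.0] * k   # Beta posterior parameter (prior + failures)
    batch_size = horizon // num_batches
    total_reward = 0
    for _ in range(num_batches):
        wins = [0] * k    # rewards buffered within the batch
        losses = [0] * k
        for _ in range(batch_size):
            # Thompson step: sample each arm's mean from its (frozen)
            # posterior and pull the arm with the largest sample.
            samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
            arm = max(range(k), key=samples.__getitem__)
            reward = 1 if rng.random() < arm_means[arm] else 0
            total_reward += reward
            if reward:
                wins[arm] += 1
            else:
                losses[arm] += 1
        # Policy change: incorporate the whole batch at once.
        for i in range(k):
            alpha[i] += wins[i]
            beta[i] += losses[i]
    return total_reward
```

Because posterior updates happen only at batch boundaries, arm pulls inside a batch can be parallelized or committed in advance, which is the practical appeal of the batched setting.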