On the Suboptimality of Thompson Sampling in High Dimensions

21 May 2021, 20:45 (modified: 24 Jan 2022, 09:50) · NeurIPS 2021 Poster · Readers: Everyone
Keywords: Combinatorial Bandits, Thompson Sampling
TL;DR: Through several simple examples, lower bounds, and numerical experiments, we show that Thompson Sampling for combinatorial bandits is suboptimal in high dimensions.
Abstract: In this paper we consider Thompson Sampling for combinatorial semi-bandits. We demonstrate that, perhaps surprisingly, Thompson Sampling is suboptimal for this problem in the sense that its regret scales exponentially in the ambient dimension, and its minimax regret scales almost linearly. This phenomenon occurs under a wide variety of assumptions, including both non-linear and linear reward functions in the Bernoulli distribution setting. We also show that adding a fixed amount of forced exploration to Thompson Sampling does not alleviate the problem. We complement our theoretical results with numerical experiments showing that, in practice, Thompson Sampling can indeed perform very poorly in some high-dimensional situations.
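For readers unfamiliar with the setting, the algorithm under study can be sketched as follows: in a Bernoulli combinatorial semi-bandit, the learner repeatedly selects a subset of base arms and observes the individual reward of each selected arm. A minimal Thompson Sampling sketch with Beta posteriors, assuming a linear reward (pick the top-m sampled arms) and illustrative parameters `mu`, `m`, `T` that are not taken from the paper:

```python
import numpy as np

def thompson_sampling_semi_bandit(mu, m, T, seed=0):
    """Thompson Sampling for a Bernoulli combinatorial semi-bandit (sketch).

    Each round: sample one value per base arm from its Beta posterior,
    play the m arms with the highest samples (linear reward), observe
    each played arm's Bernoulli reward, and update its posterior.
    """
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    d = len(mu)
    alpha = np.ones(d)  # Beta(1, 1) prior for each base arm
    beta = np.ones(d)
    plays = np.zeros(d, dtype=int)  # how often each arm was selected
    for _ in range(T):
        theta = rng.beta(alpha, beta)        # posterior sample per arm
        action = np.argsort(theta)[-m:]      # top-m arms under the sample
        rewards = rng.random(m) < mu[action] # semi-bandit feedback per arm
        alpha[action] += rewards             # posterior update: successes
        beta[action] += 1 - rewards          # posterior update: failures
        plays[action] += 1
    return plays

# Toy instance: with means (0.9, 0.8) vs (0.3, 0.2), the optimal pair
# is arms {0, 1}, and TS should concentrate its plays on them.
plays = thompson_sampling_semi_bandit(mu=[0.9, 0.8, 0.3, 0.2], m=2, T=2000)
```

The paper's point is precisely that this natural algorithm, despite behaving well on small instances like the one above, can suffer regret exponential in the dimension d.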
Supplementary Material: pdf
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Code: https://github.com/RaymZhang/TS_Combinatorial_Semi_Bandits_Curse