Finite-Time Logarithmic Bayes Regret Upper Bounds

Published: 21 Sept 2023, Last Modified: 08 Jan 2024, NeurIPS 2023 poster
Keywords: Bayesian bandits, logarithmic regret bounds, multi-armed bandits, linear bandits
TL;DR: We derive the first finite-time logarithmic Bayes regret upper bounds for Bayesian bandits
Abstract: We derive the first finite-time logarithmic Bayes regret upper bounds for Bayesian bandits. In a multi-armed bandit, we obtain $O(c_\Delta \log n)$ and $O(c_h \log^2 n)$ upper bounds for an upper confidence bound algorithm, where $c_h$ and $c_\Delta$ are constants depending on the prior distribution and the gaps of bandit instances sampled from it, respectively. The latter bound asymptotically matches the lower bound of Lai (1987). Our proofs are a major technical departure from prior works, while being simple and general. To show the generality of our techniques, we apply them to linear bandits. Our results provide insights into the value of the prior in the Bayesian setting, both in the objective and as side information given to the learner. They significantly improve upon existing $\tilde{O}(\sqrt{n})$ bounds, which have become standard in the literature despite the logarithmic lower bound of Lai (1987).
Supplementary Material: zip
Submission Number: 5400
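To make the setting concrete, the sketch below simulates Bayes regret for a generic Bayesian upper confidence bound rule on a Gaussian multi-armed bandit with a known Gaussian prior. It is a minimal illustration of the problem studied in the paper, not the authors' exact algorithm or analysis: the confidence schedule, prior parameters, and function names (e.g. `bayes_ucb_gaussian`) are assumptions chosen for readability.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): Bayesian UCB on a
# K-armed Gaussian bandit. Arm means are drawn from the prior N(mu0, sigma0^2);
# rewards have noise N(0, sigma^2). Each arm's index is its posterior mean plus
# a bonus proportional to the posterior standard deviation.

def bayes_ucb_gaussian(n, K, mu0=0.0, sigma0=1.0, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.normal(mu0, sigma0, size=K)   # true means sampled from the prior
    counts = np.zeros(K)
    sums = np.zeros(K)
    best = theta.max()
    regret = 0.0
    for t in range(1, n + 1):
        # Gaussian posterior of each arm's mean given its observed rewards.
        post_var = 1.0 / (1.0 / sigma0**2 + counts / sigma**2)
        post_mean = post_var * (mu0 / sigma0**2 + sums / sigma**2)
        # UCB index: posterior mean plus a width that shrinks as the posterior
        # concentrates; the log(n) schedule mirrors a standard UCB choice.
        index = post_mean + np.sqrt(2.0 * np.log(n) * post_var)
        arm = int(np.argmax(index))
        reward = rng.normal(theta[arm], sigma)
        counts[arm] += 1
        sums[arm] += reward
        regret += best - theta[arm]
    return regret

if __name__ == "__main__":
    # Averaging over prior draws approximates the Bayes regret, the quantity
    # bounded logarithmically in the paper.
    print(np.mean([bayes_ucb_gaussian(n=1000, K=5, seed=s) for s in range(50)]))
```

Averaging the per-run regret over many draws of the bandit instance from the prior is what distinguishes Bayes regret from the frequentist (worst-case) regret targeted by the usual $\tilde{O}(\sqrt{n})$ bounds.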