Information Directed Sampling for Sparse Linear Bandits

21 May 2021, 20:46 (modified: 31 Jan 2022, 05:10) · NeurIPS 2021 Spotlight
Keywords: Information-directed sampling, sparse linear bandits, Bayesian regret
TL;DR: We investigate the theoretical and practical applicability of information-directed sampling for sparse linear bandits.
Abstract: Stochastic sparse linear bandits offer a practical model for high-dimensional online decision-making problems and have a rich information-regret structure. In this work we explore the use of information-directed sampling (IDS), which naturally balances the information-regret trade-off. We develop a class of information-theoretic Bayesian regret bounds that nearly match existing lower bounds on a variety of problem instances, demonstrating the adaptivity of IDS. To efficiently implement sparse IDS, we propose an empirical Bayesian approach for sparse posterior sampling using a spike-and-slab Gaussian-Laplace prior. Numerical results demonstrate significant regret reductions by sparse IDS relative to several baselines.
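To make the IDS principle in the abstract concrete, here is a minimal sketch of the deterministic variant of information-directed sampling: at each round, choose the action minimizing the information ratio, i.e. squared expected regret divided by expected information gain. The function name and the per-action estimate arrays are hypothetical illustrations, not the paper's exact quantities or implementation.

```python
import numpy as np

def ids_action(expected_regret, info_gain):
    """Deterministic IDS sketch: return the index of the action minimizing
    the information ratio regret(a)^2 / info_gain(a).

    expected_regret, info_gain: hypothetical per-action estimates
    (e.g. derived from posterior samples in a Bayesian implementation).
    """
    expected_regret = np.asarray(expected_regret, dtype=float)
    info_gain = np.asarray(info_gain, dtype=float)

    # An action with (near-)zero expected regret is optimal regardless of
    # how little information it yields.
    if np.any(expected_regret <= 0):
        return int(np.argmin(expected_regret))

    # Guard against division by zero: actions with no information gain
    # but positive regret get an infinite ratio and are never chosen.
    ratio = np.where(
        info_gain > 0,
        expected_regret ** 2 / np.maximum(info_gain, 1e-12),
        np.inf,
    )
    return int(np.argmin(ratio))
```

Note that this favors an action with larger immediate regret when it is sufficiently informative, which is the information-regret trade-off the abstract refers to; the randomized IDS of Russo and Van Roy instead mixes over two actions to minimize the same ratio.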