Linearizing Contextual Bandits with Latent State Dynamics

Published: 20 May 2022, Last Modified: 05 May 2023. UAI 2022 Poster.
Keywords: contextual multi-armed bandit, latent bandit, non-stationary bandit, linear bandit, hidden Markov model, Thompson sampling, upper confidence bounds, sequential Bayesian inference
TL;DR: We use knowledge of latent structure and approximate Bayesian inference to extend linear bandit methods to contextual bandit problems with an evolving hidden state.
Abstract: In many real-world applications of multi-armed bandit problems, both rewards and contexts are often influenced by confounding latent variables which evolve stochastically over time. While the observed contexts and rewards are nonlinearly related, we show that prior knowledge of latent causal structure can be used to reduce the problem to the linear bandit setting. We develop two algorithms, Latent Linear Thompson Sampling (L2TS) and Latent Linear UCB (L2UCB), which use online EM algorithms for hidden Markov models to learn the latent transition model and maintain a posterior belief over the latent state, and then use the resulting posteriors as context features in a linear bandit problem. We upper bound the error in reward estimation in the presence of a dynamical latent state, and derive a novel problem-dependent regret bound for linear Thompson sampling with non-stationarity and unconstrained reward distributions, which we apply to L2TS under certain conditions. Finally, we demonstrate the superiority of our algorithms over related bandit algorithms through experiments.
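The pipeline described in the abstract, filtering a posterior belief over the hidden state and then feeding that belief to a linear bandit as context features, can be sketched roughly as follows. This is a minimal illustration under assumed problem sizes, not the authors' implementation: the HMM parameters, the `emission_likelihood` placeholder, and the per-arm Bayesian linear models are hypothetical stand-ins, and the online EM updates of the transition model described in the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (hypothetical, not from the paper).
K_STATES = 3   # number of latent states
N_ARMS = 5     # number of arms

# Latent transition model; in the paper this is learned with online EM for HMMs.
transition = np.full((K_STATES, K_STATES), 0.1) + 0.7 * np.eye(K_STATES)
transition /= transition.sum(axis=1, keepdims=True)

def emission_likelihood(context):
    """Placeholder for p(context | latent state); problem-specific in practice."""
    return np.ones(K_STATES) / K_STATES

# Per-arm Bayesian linear regression, using the belief over latent states as features.
d = K_STATES
A = [np.eye(d) for _ in range(N_ARMS)]    # precision matrices
b = [np.zeros(d) for _ in range(N_ARMS)]  # precision-weighted means
belief = np.ones(K_STATES) / K_STATES     # filtering distribution over the latent state

def step(context, observe_reward, noise_var=1.0):
    """One round: update the latent-state belief, pick an arm by Thompson sampling,
    then update that arm's linear reward model with the belief as features."""
    global belief
    # 1. Propagate and correct the belief (forward filtering for the HMM).
    belief = transition.T @ belief
    belief = belief * emission_likelihood(context)
    belief /= belief.sum()

    # 2. Linear Thompson sampling on top of the belief features.
    scores = []
    for a in range(N_ARMS):
        mean = np.linalg.solve(A[a], b[a])
        cov = noise_var * np.linalg.inv(A[a])
        theta = rng.multivariate_normal(mean, cov)
        scores.append(belief @ theta)
    arm = int(np.argmax(scores))

    # 3. Observe the reward and update the chosen arm's posterior.
    r = observe_reward(arm)
    A[arm] += np.outer(belief, belief)
    b[arm] += r * belief
    return arm, r
```

A UCB variant in the spirit of L2UCB would replace the posterior sampling in step 2 with an optimistic index built from the same belief features and per-arm confidence ellipsoids.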
Supplementary Material: zip