TL;DR: We study how to use offline data to accelerate online learning in latent bandits. We first establish the generality of latent bandits and then focus on linear latent bandits, presenting algorithms with matching upper and lower bounds on regret, along with supporting experiments.
Abstract: Leveraging offline data is an attractive way to accelerate online sequential decision-making. However, it is crucial to account for latent states in users or environments in the offline data, and latent bandits form a compelling model for doing so. In this light, we design end-to-end latent bandit algorithms capable of handling uncountably many latent states. We focus on a linear latent contextual bandit — a linear bandit where each user has its own high-dimensional reward parameter in $\mathbb{R}^{d_A}$, but reward parameters across users lie in a low-rank latent subspace of dimension $d_K \ll d_A$. First, we provide an offline algorithm to learn this subspace with provable guarantees. We then present two online algorithms that utilize the output of this offline algorithm to accelerate online learning. The first enjoys $\tilde O(\min(d_A\sqrt{T}, d_K\sqrt{T}(1+\sqrt{d_AT/d_KN})))$ regret guarantees, so that the effective dimension is lower when the size $N$ of the offline dataset is larger. We prove a matching lower bound on regret, showing that our algorithm is minimax optimal. The second is a practical algorithm that enjoys only a slightly weaker guarantee, but is computationally efficient. We also establish the efficacy of our methods using experiments on both synthetic data and real-life movie recommendation data from MovieLens. Finally, we theoretically establish the generality of the latent bandit model by proving a de Finetti theorem for stateless decision processes.
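A minimal sketch of the setting described in the abstract, assuming a simple SVD-based subspace estimator for the offline phase and a LinUCB-style routine run in the learned subspace for the online phase. The variable names, noise levels, and estimators below are illustrative assumptions for exposition, not the paper's exact algorithms (see the linked code for those):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: ambient dimension d_A, latent dimension d_K << d_A.
d_A, d_K = 20, 3
n_users_offline, n_rounds_per_user = 200, 50   # offline dataset of size N ~ users x rounds
n_arms, T = 10, 500                            # online horizon T for a new user

# Shared low-rank structure: an orthonormal basis B in R^{d_A x d_K}.
B, _ = np.linalg.qr(rng.standard_normal((d_A, d_K)))

def sample_user_theta():
    # Each user's reward parameter theta = B w lies in the latent subspace.
    return B @ rng.standard_normal(d_K)

# ---- Offline phase: estimate the subspace from per-user least-squares fits. ----
theta_hats = []
for _ in range(n_users_offline):
    theta = sample_user_theta()
    X = rng.standard_normal((n_rounds_per_user, d_A))              # observed arm features
    y = X @ theta + 0.1 * rng.standard_normal(n_rounds_per_user)   # noisy rewards
    theta_hats.append(np.linalg.lstsq(X, y, rcond=None)[0])
# Top-d_K left singular vectors of the stacked estimates approximate the subspace.
U, _, _ = np.linalg.svd(np.stack(theta_hats, axis=1), full_matrices=False)
B_hat = U[:, :d_K]

# ---- Online phase: LinUCB-style bandit in the learned d_K-dimensional subspace. ----
theta_new = sample_user_theta()
arms = rng.standard_normal((n_arms, d_A))
V, b = np.eye(d_K), np.zeros(d_K)      # regularized design matrix and response vector
regret = 0.0
for t in range(T):
    Z = arms @ B_hat                   # project arm features into the learned subspace
    w_hat = np.linalg.solve(V, b)
    V_inv = np.linalg.inv(V)
    ucb = Z @ w_hat + np.sqrt(np.einsum('ij,jk,ik->i', Z, V_inv, Z))
    a = int(np.argmax(ucb))
    r = arms[a] @ theta_new + 0.1 * rng.standard_normal()
    V += np.outer(Z[a], Z[a])
    b += r * Z[a]
    regret += np.max(arms @ theta_new) - arms[a] @ theta_new
print(f"cumulative regret over T={T}: {regret:.2f}")
```

When the estimated subspace is accurate, the online problem is effectively $d_K$-dimensional rather than $d_A$-dimensional, which is the intuition behind the improved regret bound quoted above.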
Lay Summary: Many real-world systems—like recommendation engines or clinical decision aids—learn better when they can combine past data with new interactions. But when past data comes from a mix of different types of users or conditions, this can confuse standard learning methods. Our work addresses this by designing algorithms that can handle these hidden differences. Specifically, we focus on settings where each user behaves differently, but these differences lie in a shared low-dimensional structure. First, we show how to use existing pre-collected data to uncover this shared structure, even when there are infinitely many user types. Then, we introduce two new learning algorithms that use this knowledge to improve decision-making with new users. One algorithm is provably optimal, and the other runs faster and is more practical. We test these methods on synthetic data and real movie recommendation data and show strong improvements. Finally, we prove that this framework of hidden differences, or latent structure, captures a broad class of reasonable models of memoryless decision-making, highlighting its generality for future applications.
Link To Code: https://github.com/hetankevin/probono
Primary Area: Theory->Online Learning and Bandits
Keywords: bandits, latent bandits, hybrid RL, online learning with offline datasets, dimensionality reduction, linear bandits
Submission Number: 8183