Keywords: latency-aware contextual bandits, average reward maximization, regret analysis, cryo-electron microscopy application
TL;DR: Algorithm design and regret analysis for a novel latency-aware extension of contextual bandits, where the learner adaptively selects arms and switches decision sets under action delays.
Abstract: We introduce a latency-aware contextual bandit framework that generalizes the standard contextual bandit problem: the learner adaptively selects arms and switches decision sets under action delays. In this setting, the learner observes the context and may select multiple arms from a decision set, with the total time determined by the selected subset. The problem can be framed as a special case of semi-Markov decision processes (SMDPs), where contexts and latencies are drawn from an unknown distribution. Leveraging the Bellman optimality equation, we design the contextual online arm filtering (COAF) algorithm, which balances exploration, exploitation, and action latency to minimize regret relative to the optimal average-reward policy. We analyze the algorithm and show that its regret upper bounds match established results in the contextual bandit literature. In numerical experiments on a movie recommendation dataset and cryo-electron microscopy (cryo-EM) data, we demonstrate that our approach efficiently maximizes cumulative reward over time.
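The abstract's core objective is maximizing average reward when each action consumes a random amount of time. The snippet below is a minimal, hypothetical sketch of that objective in a simplified non-contextual toy setting, using an optimistic reward-rate index to trade off exploration, exploitation, and latency. It is not the paper's COAF algorithm; all names, distributions, and parameters are illustrative assumptions.

```python
import numpy as np

# Toy latency-aware bandit: each pull of an arm yields a Bernoulli reward and
# consumes a random latency; the learner maximizes reward per unit of elapsed
# time. The index below is a simple optimistic rate estimate (UCB-style bonus
# on the reward mean divided by the latency mean) -- an assumption, not COAF.

rng = np.random.default_rng(0)
K = 5                                    # number of arms (illustrative)
true_reward = rng.uniform(0.2, 1.0, K)   # unknown mean rewards
true_latency = rng.uniform(0.5, 2.0, K)  # unknown mean action delays

horizon_time = 500.0
reward_sum = np.zeros(K)
latency_sum = np.zeros(K)
pulls = np.zeros(K)
elapsed, total_reward = 0.0, 0.0

while elapsed < horizon_time:
    n = int(pulls.sum()) + 1
    index = np.full(K, np.inf)           # unpulled arms get priority
    seen = pulls > 0
    mean_r = reward_sum[seen] / pulls[seen]
    mean_d = latency_sum[seen] / pulls[seen]
    bonus = np.sqrt(2.0 * np.log(n) / pulls[seen])
    index[seen] = (mean_r + bonus) / mean_d   # optimistic reward rate

    a = int(np.argmax(index))
    r = rng.binomial(1, true_reward[a])       # observed reward
    d = rng.exponential(true_latency[a])      # observed latency

    reward_sum[a] += r
    latency_sum[a] += d
    pulls[a] += 1
    total_reward += r
    elapsed += d

print(f"achieved average reward per unit time: {total_reward / elapsed:.3f}")
print(f"best achievable rate:                  {np.max(true_reward / true_latency):.3f}")
```

The average-reward regret in such a setting compares the achieved reward rate against the best arm's reward-to-latency ratio, which is why the index divides the optimistic reward estimate by the estimated latency rather than ranking arms by reward alone.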
Supplementary Material: zip
Primary Area: reinforcement learning
Submission Number: 9663