Keywords: Linear Bandits, Online Learning with Delay, Mixing Processes
Abstract: We study the linear stochastic bandit problem, relaxing the standard i.i.d. assumption on the observation noise.
As an alternative to this restrictive assumption, we allow the noise terms across rounds to be sub-Gaussian but
interdependent, with dependencies that decay over time. To address this setting, we develop new confidence sequences
using a recently introduced reduction scheme to sequential probability assignment, and use these to derive a bandit
algorithm based on the principle of optimism in the face of uncertainty. We provide regret bounds for the
resulting algorithm, expressed in terms of the decay rate of the strength of dependence between observations. Among
other results, we show that our bounds recover the standard rates up to a factor of the mixing time for geometrically
mixing observation noise.
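For readers unfamiliar with the optimism principle mentioned above, the sketch below shows a generic OFUL-style round: fit a ridge estimate from past observations and pick the action maximizing an upper confidence bound. This is not the paper's algorithm; in particular the fixed confidence radius `beta` is a placeholder for the confidence sequences the paper constructs for dependent (mixing) noise.

```python
import numpy as np

def oful_step(actions, X, y, lam=1.0, beta=1.0):
    """One optimism-based (OFUL-style) linear bandit round.

    actions: (k, d) candidate action features
    X, y:    past features (t, d) and rewards (t,)
    beta:    confidence radius (hypothetical fixed value; the paper's
             confidence sequences for mixing noise would replace it)
    """
    d = actions.shape[1]
    V = lam * np.eye(d) + X.T @ X            # regularized design matrix
    theta_hat = np.linalg.solve(V, X.T @ y)  # ridge estimate of theta*
    V_inv = np.linalg.inv(V)
    # UCB(a) = <a, theta_hat> + beta * ||a||_{V^{-1}}
    bonus = np.sqrt(np.einsum('ij,jk,ik->i', actions, V_inv, actions))
    ucb = actions @ theta_hat + beta * bonus
    return int(np.argmax(ucb))

# Toy run with a known parameter and mildly noisy linear rewards.
rng = np.random.default_rng(0)
theta_star = np.array([1.0, -0.5])
actions = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
X = rng.normal(size=(20, 2))
y = X @ theta_star + 0.1 * rng.normal(size=20)
chosen = oful_step(actions, X, y)
```

With enough past data the estimate concentrates around `theta_star`, so the optimistic choice here is the first action, whose true reward (1.0) dominates.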
Confirmation: I understand that authors of each paper submitted to EWRL may be asked to review 2-3 other submissions to EWRL.
Serve As Reviewer: ~Baptiste_Abélès1, ~Gergely_Neu1
Track: Regular Track: unpublished work
Submission Number: 136