TL;DR: This work proposes an algorithm for linear bandits with latent features, achieving sublinear regret by augmenting the observed features with orthogonal basis vectors and using a doubly robust reward estimator, without requiring prior knowledge of the unobserved feature space.
Abstract: We study the linear bandit problem that accounts for partially observable features. Without proper handling, unobserved features can lead to linear regret in the decision horizon $T$, as their influence on rewards is unknown.
To tackle this challenge, we propose a novel theoretical framework and an algorithm with sublinear regret guarantees.
The core of our algorithm consists of: (i) feature augmentation, which appends basis vectors orthogonal to the row space of the observed features; and (ii) a doubly robust reward estimator.
Our approach achieves a regret bound of $\tilde{O}(\sqrt{(d + d_h)T})$, where $d$ denotes the dimension of the observed features, and $d_h$ represents the number of nonzero coefficients in the parameter associated with the reward component projected onto the subspace orthogonal to the row space spanned by the observed features.
Notably, our algorithm requires no prior knowledge of the unobserved feature space, which may expand as more features become hidden.
Numerical experiments confirm that our algorithm outperforms both non-contextual multi-armed bandit algorithms and linear bandit algorithms that rely solely on the observed features.
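For intuition, the following is a minimal numerical sketch of the two components named in the abstract: augmenting the observed arm features with an orthonormal basis of the orthogonal subspace, and forming a doubly robust pseudo-reward. This is an illustrative reconstruction, not the authors' code: the arms-as-rows convention for the feature matrix, the helper names `augment_features` and `dr_reward`, and the specific doubly robust form (model imputation plus an inverse-propensity correction) are assumptions, and the paper's exact construction and estimator may differ.

```python
import numpy as np

def augment_features(X, tol=1e-10):
    """Illustrative sketch (assumed convention): X is a (K, d) matrix whose
    rows are the K arms' observed feature vectors. We append an orthonormal
    basis of the orthogonal complement of the column space of X, so the
    augmented design spans R^K and any mean-reward vector over the arms is
    realizable as a linear function of the augmented features."""
    U, s, _ = np.linalg.svd(X, full_matrices=True)   # U is (K, K)
    rank = int(np.sum(s > tol))
    B = U[:, rank:]                                  # (K, K - rank), orthonormal columns
    return np.hstack([X, B])                         # (K, d + K - rank)

def dr_reward(a_t, r_t, pi, r_hat):
    """Illustrative doubly robust pseudo-rewards for all K arms at one round.
    a_t: index of the pulled arm, r_t: observed reward, pi: (K,) sampling
    probabilities used at this round, r_hat: (K,) imputed rewards from the
    current linear model. With the true sampling probabilities pi this is
    unbiased for every arm's mean reward; a good imputation r_hat mainly
    reduces the variance of the correction term."""
    dr = r_hat.astype(float).copy()
    dr[a_t] += (r_t - r_hat[a_t]) / pi[a_t]
    return dr

# Small usage example with hypothetical dimensions.
rng = np.random.default_rng(0)
K, d = 5, 3
X = rng.normal(size=(K, d))
X_aug = augment_features(X)              # shape (5, 5) when X has full column rank 3
pi = np.full(K, 1.0 / K)                 # e.g., uniform sampling probabilities
r_hat = X_aug @ rng.normal(size=X_aug.shape[1])
pseudo = dr_reward(a_t=2, r_t=1.0, pi=pi, r_hat=r_hat)
```

Under these assumptions, the augmented features make every mean-reward vector linearly realizable, and the doubly robust pseudo-rewards supply an estimate for every arm (not just the pulled one), which is what allows the regret to scale with $d + d_h$ rather than with the full augmented dimension.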
Lay Summary: In many decision-making systems—like recommending advertisements—outcomes depend not only on observable information but also on hidden factors. For example, in advertising, unobserved factors such as emotional appeal or creative design can significantly influence users' click-through rates. Most existing methods assume decisions are based solely on visible data, but ignoring hidden factors can lead to poor outcomes.
Our research aims to address this limitation. We develop a method that enables effective and efficient decision-making even when some information is unobservable. Rather than attempting to recover the hidden factors directly, our approach leverages the observed data in a way that indirectly accounts for the unobserved parts. Additionally, we incorporate a technique that reduces errors caused by the missing information.
This work matters because it aligns more closely with real-world decision-making, where not everything is observable. Our method does not rely on assumptions about the nature of the hidden factors, making it broadly applicable. We demonstrate, both theoretically and empirically, that our approach consistently outperforms conventional methods across various settings, enabling systems to make more accurate decisions even without access to the full picture.
Primary Area: Theory->Online Learning and Bandits
Keywords: Linear Bandits, Partially Observable Features, Doubly Robust
Submission Number: 8664