Sparse Linear Bandits with Fixed Sparsity Support: Adversarial and Stochastic Regimes

Published: 03 Feb 2026 · Last Modified: 24 Apr 2026 · AISTATS 2026 Poster · CC BY 4.0
TL;DR: We propose algorithms to solve sparse linear bandit problems in both adversarial and stochastic regimes, under the assumption of fixed sparsity support.
Abstract: We study sparse linear bandits in both adversarial and stochastic settings. While existing literature has extensively explored sparse linear bandits in the stochastic regime, the adversarial setting, particularly for general $l_p$-ball action sets $(p>1)$, remains poorly understood. Our work addresses this gap by showing that the curse of dimensionality in adversarial linear bandits can be broken under a natural fixed sparsity support assumption. Specifically, we design algorithms for the $l_\infty$- and $l_2$-balls that integrate sparsity support identification with the OSMD algorithm, achieving regret bounds of $O(s\sqrt{T}\log T)$ and $O(\sqrt{sT}\log T)$, respectively. These results nearly match the optimal rates attainable when the sparsity support is known, and significantly improve upon the $O(d\sqrt{T})$ regret of algorithms that ignore sparsity. Furthermore, in the stochastic setting, we show how the geometry of the $l_p$-ball action set influences both exploration and regret. Our work highlights fundamental contrasts between the adversarial and stochastic regimes, and establishes the first regret guarantees for sparse adversarial linear bandits beyond the $l_1$-ball action set.
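The abstract describes algorithms that combine sparsity support identification with a bandit routine over the identified coordinates. The paper's actual method pairs support identification with OSMD; the snippet below is only a minimal illustrative sketch of the general two-phase idea in a simple stochastic setting on the $l_\infty$-ball: pull each coordinate direction to estimate the support of a sparse parameter $\theta$, then commit to the greedy $l_\infty$ action restricted to that support. All function names, the thresholding rule, and the noise model are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def support_then_commit(theta, n_explore, noise=0.1, seed=0):
    """Illustrative two-phase sketch (NOT the paper's OSMD-based method):
    Phase 1 estimates the sparsity support by sampling each coordinate
    direction e_i; Phase 2 plays the greedy l_inf action on that support."""
    rng = np.random.default_rng(seed)
    d = len(theta)
    est = np.zeros(d)
    # Phase 1: average n_explore noisy rewards of action e_i to estimate theta_i.
    for i in range(d):
        rewards = theta[i] + noise * rng.standard_normal(n_explore)
        est[i] = rewards.mean()
    # Keep coordinates whose estimate clears a simple (assumed) noise threshold.
    thresh = 3.0 * noise / np.sqrt(n_explore)
    support = np.flatnonzero(np.abs(est) > thresh)
    # Phase 2: on the l_inf ball the reward-maximizing action is
    # sign(theta) on the support and 0 elsewhere.
    action = np.zeros(d)
    action[support] = np.sign(est[support])
    return support, action

# Hypothetical example: a 6-dimensional problem with sparsity s = 2.
theta = np.array([0.5, 0.0, -0.5, 0.0, 0.0, 0.0])
support, action = support_then_commit(theta, n_explore=500)
```

With enough exploration rounds the estimated support concentrates on the truly nonzero coordinates; the interesting regime analyzed in the paper is how to interleave this identification with regret minimization, rather than running it as a separate phase.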
Submission Number: 637