Keywords: Linear Bandits, Matrix Sketching, Regret Analysis
TL;DR: We propose a framework for efficient sketch-based linear bandits that addresses the linear regret matrix sketching can otherwise incur.
Abstract: Sketching techniques have increasingly become a pivotal tool for improving the efficiency of online learning.
In linear bandit settings, current sketch-based approaches leverage matrix sketching to reduce the per-round time complexity from $\Omega\left(d^2\right)$ to $O(d)$, where $d$ is the input dimension. Despite this improved efficiency, these approaches suffer a critical pitfall: if the spectral tail of the covariance matrix does not decay rapidly, they can incur linear regret.
In this paper, we revisit the regret analysis and algorithm design for approximating the covariance matrix via matrix sketching in linear bandits.
We illustrate how inappropriate sketch sizes can result in unbounded spectral loss, thereby causing linear regret.
To prevent this issue, we propose Dyadic Block Sketching, an innovative streaming matrix sketching approach that adaptively manages sketch size to constrain global spectral loss.
This approach effectively tracks the best rank-$k$ approximation in an online manner, ensuring efficiency when the geometry of the covariance matrix is favorable.
We then apply the proposed Dyadic Block Sketching to linear bandits and demonstrate that the resulting bandit algorithm achieves sublinear regret without prior knowledge of the covariance matrix, even in the worst case.
Our method is a general framework for efficient sketch-based linear bandits, applicable to all existing sketch-based approaches, and offers improved regret bounds accordingly.
Additionally, we conduct comprehensive empirical studies using both synthetic and real-world data to validate the accuracy of our theoretical findings and to highlight the effectiveness of our algorithm.
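To make the spectral-loss issue concrete, the following is a minimal, illustrative implementation of a standard Frequent Directions sketch (a common streaming matrix-sketching baseline, not the paper's Dyadic Block Sketching). With a fixed sketch size $\ell$, the covariance approximation error is bounded by $\|A\|_F^2/\ell$; when the spectral tail is heavy, this bound stays large for small $\ell$, which is the failure mode that motivates adaptively growing the sketch. All function and variable names here are our own for illustration.

```python
import numpy as np

def frequent_directions(rows, d, ell):
    """Maintain an (ell x d) sketch B such that B^T B approximates A^T A,
    where A stacks the streamed rows. Guarantees
    ||A^T A - B^T B||_2 <= ||A||_F^2 / ell for this shrink rule."""
    B = np.zeros((ell, d))
    for x in rows:
        # find an empty (all-zero) slot for the incoming row
        zero = np.where(~B.any(axis=1))[0]
        if len(zero) == 0:
            # sketch is full: shrink singular values to free at least one row
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[-1] ** 2  # smallest squared singular value
            s = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = s[:, None] * Vt  # smallest direction is now a zero row
            zero = np.where(~B.any(axis=1))[0]
        B[zero[0]] = x
    return B
```

Running this on a stream of feature vectors yields a compact surrogate for the covariance matrix; the key observation is that the error bound degrades as $1/\ell$, so no fixed $\ell$ controls the loss uniformly over all spectra, which is why an adaptive, budget-doubling scheme is needed.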
Supplementary Material: zip
Primary Area: reinforcement learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6375