Follow the Approximate Sparse Leader for No-Regret Online Sparse Linear Approximation

Published: 01 Jan 2025 · Last Modified: 13 Sept 2025 · IEEE Signal Processing Letters, 2025 · CC BY-SA 4.0
Abstract: We consider the problem of online sparse linear approximation, where a learner sequentially predicts the best sparse linear approximation of each measurement in an as-yet-unobserved sequence, in terms of a few columns of a given measurement matrix. The inherent difficulty of offline sparse recovery makes the online problem challenging as well. In this letter, we propose Follow-The-Approximate-Sparse-Leader, an efficient online meta-policy for this problem. Through a detailed theoretical analysis, we prove that, under certain assumptions on the measurement sequence, the proposed policy enjoys a data-dependent sublinear upper bound on the static regret, which can range from logarithmic to square-root. Extensive numerical simulations corroborate the theoretical findings and demonstrate the efficacy of the proposed online policy.
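The abstract does not specify the policy's internals, but a follow-the-leader scheme with an approximate sparse solver can be sketched as follows. This is a minimal, hypothetical illustration only: it assumes a squared-residual per-round loss, takes the "approximate sparse leader" at each round to be a k-sparse fit to the running mean of past measurements, and uses a simple orthogonal matching pursuit (OMP) routine as a stand-in for whatever approximate sparse solver the letter actually employs.

```python
# Hypothetical sketch of a Follow-The-Approximate-Sparse-Leader loop.
# Assumptions (not stated in the abstract): loss_t(x) = ||y_t - A x||^2,
# the leader at round t is a k-sparse approximate minimizer of the
# cumulative loss, and OMP stands in for the unspecified sparse solver.
import numpy as np

def omp(A, y, k):
    """Greedy OMP: return a k-sparse x such that A @ x approximates y."""
    m, n = A.shape
    support, residual, x = [], y.copy(), np.zeros(n)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x

def ftasl(A, measurements, k):
    """Predict x_t from past measurements only, then observe y_t."""
    m, n = A.shape
    running_sum, losses = np.zeros(m), []
    for t, y in enumerate(measurements, start=1):
        # The cumulative squared loss over rounds 1..t-1 is minimized (over
        # sparse x) by a sparse approximation of the mean past measurement.
        x = omp(A, running_sum / (t - 1), k) if t > 1 else np.zeros(n)
        losses.append(float(np.linalg.norm(y - A @ x) ** 2))
        running_sum += y
    return losses

# Toy usage: 20 noisy measurements of a fixed 3-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60)) / np.sqrt(30)
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, -0.7, 0.5]
ys = [A @ x_true + 0.05 * rng.standard_normal(30) for _ in range(20)]
print(np.round(ftasl(A, ys, k=3), 3))
```

In this toy run the per-round loss drops quickly once a few measurements have been accumulated, which is the qualitative behavior a data-dependent sublinear regret bound would suggest; the actual policy and its regret analysis are detailed in the letter itself.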