Keywords: Low-Rank Approximation, Online Algorithms
Abstract: We study the problem of online low-rank approximation, where at each time step the algorithm receives a new vector and must maintain a rank-$k$ subspace that serves as a compressed representation of the data. The specific formulation we use is the weighted low-rank approximation (WLRA) objective: at each step, the algorithm incurs loss equal to the weighted squared reconstruction error of the incoming point with respect to its current subspace. The goal is to minimize regret against the best rank-$k$ subspace in hindsight, whose reconstruction cost we denote by $\mathcal{C}$. We first establish an online-to-offline reduction: the existence of an efficient no-regret online algorithm for WLRA would imply an efficient approximation scheme for the offline problem, which is unlikely under standard complexity assumptions. Although WLRA is APX-hard in the offline setting, we show that the standard Multiplicative Weights Update Algorithm (MWUA) can achieve sublinear regret in expectation with respect to a $(1+\varepsilon)$-multiplicative approximation of $\mathcal{C}$. Specifically, we use an adaptive spherical hierarchical region decomposition that iteratively refines the $d$-dimensional unit sphere $\mathbb{S}^d$ based on the density of the data. At each split, a region is partitioned into $2^{d-1}$ sub-regions, producing a hierarchical tree decomposition, while our algorithm maintains the centroids of the points in each region as its set of experts. Finally, we complement our theoretical results with empirical evaluations demonstrating the efficiency of our algorithm compared to previous baselines.
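To make the online protocol concrete, the sketch below is a minimal Python illustration of MWUA over a fixed pool of candidate rank-$k$ subspaces, using the weighted squared reconstruction error (the WLRA loss) as the per-round cost. The expert pool, learning rate `eta`, and function names here are illustrative assumptions; the paper's actual algorithm builds its expert set adaptively from the spherical region decomposition rather than fixing it in advance.

```python
# Minimal sketch, not the paper's algorithm: MWUA over a fixed pool of
# candidate rank-k subspaces, with the weighted squared reconstruction
# error (the WLRA loss) as the per-round cost.
import numpy as np

def wlra_loss(U, x, w):
    """Weighted squared reconstruction error of point x with respect to
    the subspace spanned by the orthonormal columns of U (shape (d, k))."""
    residual = x - U @ (U.T @ x)
    return w * float(residual @ residual)

def mwua_wlra(experts, stream, eta=0.1):
    """experts: list of (d, k) orthonormal bases; stream: iterable of
    (point, weight) pairs. Returns the expected cumulative loss of the
    randomized strategy that samples an expert each round."""
    weights = np.ones(len(experts))
    total = 0.0
    for x, w in stream:
        probs = weights / weights.sum()
        losses = np.array([wlra_loss(U, x, w) for U in experts])
        total += float(probs @ losses)              # expected loss this round
        scaled = losses / max(losses.max(), 1e-12)  # normalize losses to [0, 1]
        weights *= np.exp(-eta * scaled)            # multiplicative update
    return total

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k = 5, 2
    # Hypothetical expert pool: random orthonormal bases via reduced QR.
    experts = [np.linalg.qr(rng.standard_normal((d, k)))[0] for _ in range(8)]
    stream = [(rng.standard_normal(d), 1.0) for _ in range(100)]
    print(mwua_wlra(experts, stream))
```

Normalizing each round's losses into $[0, 1]$ before the exponential update is the standard preconditioning step in the MWUA regret analysis; under that normalization the expected loss of the sampled expert trails the best fixed expert by a sublinear regret term.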
Supplementary Material: zip
Primary Area: optimization
Submission Number: 21828