New Subset Selection Algorithms for Low Rank Approximation: Offline and Online

Published: 01 Jan 2023 · Last Modified: 14 May 2024 · STOC 2023 · CC BY-SA 4.0
Abstract: Subset selection for the rank-k approximation of an n × d matrix A offers improvements in the interpretability of matrices, as well as a variety of computational savings. This problem is well understood when the error measure is the Frobenius norm, with various tight algorithms known even in challenging models such as the online model, where an algorithm must select the column subset irrevocably as the columns arrive one by one. In sharp contrast, when the error measure is replaced by other matrix losses, optimal trade-offs between the subset size and the approximation quality have not been settled, even in the standard offline setting. We give a number of results towards closing these gaps.

In the offline setting, we achieve nearly optimal bicriteria algorithms in two settings. First, we remove a √k factor from a prior result of Song–Woodruff–Zhong when the loss function is any entrywise loss with an approximate triangle inequality and at least linear growth, which includes, e.g., the Huber loss. Our result is tight when applied to the ℓ1 loss. We give a similar improvement for the entrywise ℓp loss for p > 2, improving a previous distortion of Õ(k^(1−1/p)) to O(k^(1/2−1/p)). We show this is tight for p = ∞, while for 2 < p < ∞, we give the first bicriteria algorithms for (1+ε)-approximate entrywise ℓp low rank approximation. Our results come from a general technique which improves distortions by replacing the use of a well-conditioned basis with a slightly larger spanning set, for which any vector can be expressed as a linear combination with small Euclidean norm. This idea may be of independent interest: we show, for example, that it also gives the first oblivious ℓp subspace embeddings for 1 ≤ p < 2 with Õ(d^(1/p)) distortion, which is nearly optimal, improves on the previously best known Õ(d), and closes a long line of work.

In the online setting, we give the first online subset selection algorithms for ℓp subspace approximation and entrywise ℓp low rank approximation, by showing how to implement the classical sensitivity sampling algorithm online, which is challenging due to the sequential nature of sensitivity sampling. Our main technique is an online algorithm for detecting when an approximately optimal subspace changes substantially. We also give new related results for the online setting, including online coresets for Euclidean (k, p)-clustering, as well as an online active regression algorithm making Θ(d^(p/2)/ε^(p−1)) queries, answering open questions of Musco–Musco–Woodruff–Yasuda and Chen–Li–Sun.
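To make the spanning-set idea from the offline results concrete, here is a toy numerical illustration, not the paper's construction: over a basis with one badly scaled direction, representing a vector in the span can force huge coefficients, while appending a single redundant, well-scaled vector lets the minimum-Euclidean-norm coefficients, computed via the pseudoinverse, stay small. The specific scalings (1e-3, 1e3) and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 5))
B[:, 0] *= 1e-3                 # one badly scaled basis direction
v = 1e3 * B[:, 0]               # a unit-scale vector along that direction

# Over the basis alone, the unique representation puts weight 1e3
# on the tiny column, so the coefficient vector is huge.
x_basis = np.linalg.pinv(B) @ v

# Slightly larger spanning set: append one well-scaled copy of the
# bad direction. For a vector in the span, the pseudoinverse gives
# the minimum Euclidean-norm coefficients, which now route almost
# all the mass through the new column.
S = np.hstack([B, 1e3 * B[:, [0]]])
x_span = np.linalg.pinv(S) @ v

print(np.linalg.norm(x_basis))  # ~1e3
print(np.linalg.norm(x_span))   # ~1, orders of magnitude smaller
```

The toy fixes one bad direction; the paper's technique needs spanning sets for which every vector in the span admits a small-coefficient representation, which is what replaces the well-conditioned basis in its arguments.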
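The online results are built on running sensitivity sampling over a stream. The sketch below is a simplified stand-in, not the paper's algorithm: it keeps each arriving row irrevocably with probability proportional to its online ℓ2 leverage score, a classical computable overestimate used in online row sampling, whereas the paper works with ℓp sensitivities and additionally detects when an approximately optimal subspace changes substantially. The oversampling constant `c` and the function name are assumptions.

```python
import numpy as np

def online_sensitivity_sample(rows, c=2.0, seed=0):
    """Keep each arriving row a irrevocably with probability
    min(1, c * s), where s = a^T (B^T B)^+ a is its online l2
    leverage score against everything kept so far (plus itself).
    Kept rows get weight 1/probability, so weighted sums of
    per-row costs remain unbiased estimates of the full sums.
    Recomputing the pseudoinverse per row costs O(d^3); that is
    fine for a sketch but not how one would implement this at scale.
    """
    rng = np.random.default_rng(seed)
    kept, weights = [], []
    for a in rows:
        B = np.vstack(kept + [a])
        s = float(a @ np.linalg.pinv(B.T @ B) @ a)
        q = min(1.0, c * s)
        if q > 0.0 and rng.random() < q:
            kept.append(a)
            weights.append(1.0 / q)
    return np.vstack(kept), np.array(weights)

# Example: a stream of 500 rows in R^20 yields a much smaller
# weighted subset, chosen without ever revisiting a past decision.
stream = np.random.default_rng(1).standard_normal((500, 20))
S, w = online_sensitivity_sample(stream)
print(S.shape[0], "rows kept out of", stream.shape[0])
```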