Keywords: learning theory, high-dimensional robust statistics, non-convex optimization, sparse estimation
TL;DR: We show that outlier-robust sparse estimation tasks, specifically robust sparse mean estimation and robust sparse PCA, can be solved efficiently using first-order methods.
Abstract: We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints, with a focus on the fundamental tasks of robust sparse mean estimation and robust sparse PCA. We develop novel and simple optimization formulations for these problems such that any approximate stationary point of the associated optimization problem yields a near-optimal solution for the underlying robust estimation task. As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks. The obtained algorithms are simple, practical, and succeed under broader distributional assumptions compared to prior work.
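A minimal sketch of the kind of approach the abstract describes: run a first-order method on sample weights so that downweighting outliers reduces a spectral objective restricted to a sparse set of coordinates. The function name, the heuristic top-k support selection, and the crude projection onto the capped simplex below are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch (not the paper's exact algorithm): projected gradient
# descent on sample weights for robust sparse mean estimation.
# eps is the assumed corruption fraction, k the assumed sparsity of the mean.
import numpy as np

def robust_sparse_mean(X, k, eps, iters=200, lr=0.5):
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights, start uniform
    cap = 1.0 / ((1.0 - eps) * n)    # no single sample may carry too much weight

    for _ in range(iters):
        mu = w @ X                    # weighted mean estimate
        Xc = X - mu                   # centered samples
        # Candidate sparse support: the k coordinates with the largest
        # weighted second moment (a simple stand-in for the sparsity
        # constraint in the actual formulation).
        second_moment = w @ (Xc ** 2)
        support = np.argsort(second_moment)[-k:]
        # Top eigenvector of the weighted covariance on that support.
        cov = (Xc[:, support] * w[:, None]).T @ Xc[:, support]
        _, evecs = np.linalg.eigh(cov)
        v = evecs[:, -1]
        # Subgradient of the top eigenvalue w.r.t. the weights: squared
        # projections of the centered samples onto v.
        grad = (Xc[:, support] @ v) ** 2
        # Gradient step, then a crude projection back toward the capped
        # simplex {w : 0 <= w_i <= cap, sum_i w_i = 1}.
        w = w - lr * grad / (np.linalg.norm(grad) + 1e-12)
        w = np.clip(w, 0.0, cap)
        w = w / w.sum()

    # Hard-threshold the final weighted mean to a k-sparse estimate.
    mu = w @ X
    out = np.zeros(d)
    top = np.argsort(np.abs(mu))[-k:]
    out[top] = mu[top]
    return out
```

The point of the sketch is only the structure the abstract highlights: the estimator is obtained by running a generic first-order method on a (non-convex) weight optimization problem, and any approximate stationary point of that problem is taken as the solution to the robust estimation task.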
Supplementary Material: pdf
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/outlier-robust-sparse-estimation-via-non/code)