Keywords: Non-convex Optimization; Low-rank Matrix Recovery; Riemannian Optimization; Virtual Sequences; Linear Gaussian Measurements; Sample Complexity
TL;DR: We bridge the gap between non-convex and convex matrix sensing, achieving optimal sample complexity and a faster convergence rate in the general setting.
Abstract: We study the problem of recovering an unknown $d_1 \times d_2$ rank-$r$ matrix from $m$ random linear measurements. Convex methods achieve the optimal sample complexity $m = \Omega(r(d_1 + d_2))$ but are computationally expensive. Non-convex approaches, while more computationally efficient, often require a suboptimal sample complexity $m = \Omega(r^2(d_1 + d_2))$. A recent advance achieves $m = \Omega(rd_1)$ for a non-convex approach, but it relies on the restrictive assumption of positive semidefinite (PSD) matrices and suffers from slow convergence in ill-conditioned settings. Bridging this gap, we show that Riemannian gradient descent (RGD) achieves both optimal sample complexity and computational efficiency without requiring the PSD assumption. Specifically, for Gaussian measurements, RGD exactly recovers the low-rank matrix with $m = \Omega(r(d_1 + d_2))$, matching the information-theoretic lower bound, and converges linearly to the global minimum with a convergence rate that can be made arbitrarily small.
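For intuition, below is a minimal NumPy sketch of one standard way Riemannian gradient descent can be instantiated for this matrix-sensing setup (spectral initialization, tangent-space projection on the rank-$r$ manifold, and truncated-SVD retraction). The function name, step size, and iteration count are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): RGD on the manifold of rank-r
# matrices for matrix sensing with Gaussian measurements A_i and y_i = <A_i, X*>.
import numpy as np

def rgd_matrix_sensing(A, y, r, step=1.0, iters=300):
    """A: (m, d1, d2) measurement matrices; y: (m,) observations; r: target rank."""
    m, d1, d2 = A.shape
    # Spectral initialization: top-r truncated SVD of (1/m) * sum_i y_i A_i.
    M0 = np.tensordot(y, A, axes=(0, 0)) / m
    U, s, Vt = np.linalg.svd(M0, full_matrices=False)
    X = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
    for _ in range(iters):
        residual = np.tensordot(A, X, axes=([1, 2], [0, 1])) - y   # <A_i, X> - y_i
        G = np.tensordot(residual, A, axes=(0, 0)) / m             # Euclidean gradient
        # Project the gradient onto the tangent space of the rank-r manifold at X.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        U, Vt = U[:, :r], Vt[:r, :]
        PU, PV = U @ U.T, Vt.T @ Vt
        G_tan = PU @ G + G @ PV - PU @ G @ PV
        # Gradient step in the tangent space, then retract back to rank r via SVD.
        Ux, sx, Vxt = np.linalg.svd(X - step * G_tan, full_matrices=False)
        X = Ux[:, :r] @ np.diag(sx[:r]) @ Vxt[:r, :]
    return X
```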
Latex Source Code: zip
Signed PMLR Licence Agreement: pdf
Readers: auai.org/UAI/2025/Conference, auai.org/UAI/2025/Conference/Area_Chairs, auai.org/UAI/2025/Conference/Reviewers, auai.org/UAI/2025/Conference/Submission782/Authors, auai.org/UAI/2025/Conference/Submission782/Reproducibility_Reviewers
Submission Number: 782