Global Convergence of Four-Layer Matrix Factorization under Random Initialization

ICLR 2026 Conference Submission 13237 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: deep learning theory, matrix factorization
TL;DR: We give the first global convergence guarantee for gradient descent on deep matrix factorization models with more than two layers, beyond the NTK regime.
Abstract: Gradient descent dynamics on the deep matrix factorization problem are extensively studied as a simplified theoretical model for deep neural networks. Although the convergence theory for two-layer matrix factorization is well established, no global convergence guarantee for general deep matrix factorization under random initialization has been established to date. To bridge this gap, we provide a polynomial-time global convergence guarantee for randomly initialized gradient descent on four-layer matrix factorization, given certain conditions on the target matrix and a standard balanced regularization term. Our analysis employs new techniques to show saddle-avoidance properties of gradient descent dynamics, and extends previous theories to characterize how the eigenvalues of the layer weights evolve during training.
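For concreteness, a common formulation of the four-layer problem with a balanced regularizer (a sketch based on the standard setup in the matrix factorization literature; the paper's exact objective, notation, and regularization weight $\lambda$ may differ) is

$$\min_{W_1, W_2, W_3, W_4}\; \frac{1}{2}\,\bigl\| W_4 W_3 W_2 W_1 - M^{\star} \bigr\|_F^2 \;+\; \frac{\lambda}{2} \sum_{i=1}^{3} \bigl\| W_{i+1}^{\top} W_{i+1} - W_i W_i^{\top} \bigr\|_F^2,$$

where $M^{\star}$ is the target matrix and the second term penalizes imbalance between consecutive layers, which keeps the layer weights well conditioned along the gradient descent trajectory.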
Primary Area: optimization
Submission Number: 13237