Gradient descent in matrix factorization: Understanding large initialization

Published: 26 Apr 2024, Last Modified: 15 Jul 2024, UAI 2024 poster, CC BY 4.0
Keywords: Gradient descent, matrix factorization, large initialization, trajectory analysis, incremental learning
Abstract: Gradient Descent (GD) has been proven effective in solving various matrix factorization problems. However, its optimization behavior with a large initialization remains less well understood. To address this gap, this paper presents a novel theoretical framework for examining the convergence trajectory of GD with a large initialization. The framework is grounded in signal-to-noise ratio concepts and inductive arguments. The results uncover an implicit incremental learning phenomenon in GD and offer a deeper understanding of its performance in large initialization scenarios.
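To illustrate the setting described in the abstract (this sketch is not taken from the paper; the objective, the dimensions, the initialization scale alpha, the step size eta, and the iteration count are all illustrative assumptions), the lines below run plain gradient descent on a symmetric matrix factorization objective starting from a large random initialization.

import numpy as np

# Illustrative setup: recover a low-rank PSD matrix X = U* U*^T by running
# gradient descent on f(U) = ||X - U U^T||_F^2 / 4 from a large initialization.
rng = np.random.default_rng(0)
n, r = 30, 2
U_star = rng.normal(size=(n, r))
X = U_star @ U_star.T                    # ground-truth low-rank matrix

alpha = 3.0                              # large initialization scale (illustrative)
U = alpha * rng.normal(size=(n, r))      # large random starting point

eta = 5e-4                               # step size (illustrative)
for _ in range(30000):
    grad = (U @ U.T - X) @ U             # gradient of f at the current iterate
    U -= eta * grad

print("relative error:", np.linalg.norm(U @ U.T - X) / np.linalg.norm(X))

Varying alpha in this toy example loosely illustrates the regime the paper studies: with a larger alpha, the early iterations mainly shrink the excess energy in the factors before the signal components emerge, which relates to the incremental learning phenomenon mentioned in the abstract.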
List Of Authors: Chen, Hengchao and Chen, Xin and Elmasri, Mohamad and Sun, Qiang
LaTeX Source Code: zip
Signed License Agreement: pdf
Submission Number: 528