SGD and Weight Decay Secretly Minimize the Rank of Your Neural Network

Published: 11 Oct 2024, Last Modified: 03 Dec 2024
Venue: M3L Poster
License: CC BY 4.0
Keywords: Inductive biases, low-rank bias, SGD, implicit regularization
Abstract: We investigate the inherent bias of Stochastic Gradient Descent (SGD) toward learning low-rank weight matrices during the training of deep neural networks. Our results demonstrate that training with mini-batch SGD and weight decay induces a bias toward rank minimization in the weight matrices. Specifically, we show both theoretically and empirically that this bias becomes more pronounced with smaller batch sizes, higher learning rates, or stronger weight decay. Additionally, we predict and empirically confirm that weight decay is necessary for this bias to occur. Unlike prior work, our analysis does not rely on assumptions about the data, convergence, or optimality of the weight matrices, making it applicable to neural network architectures of any width or depth. Finally, we empirically explore the connection between this bias and generalization, finding that it has only a marginal effect on test performance.
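The following is a minimal sketch (not the authors' code) of how the claimed effect could be probed empirically: train a small MLP with mini-batch SGD plus weight decay and track the effective rank of a hidden weight matrix over training. The dataset, architecture, hyperparameters, and the entropy-based effective-rank measure are illustrative assumptions; the abstract predicts the rank bias is strongest with small batches, large learning rates, and strong weight decay.

```python
# Illustrative sketch: monitor effective rank of a weight matrix under
# SGD + weight decay (assumed setup, not the paper's experimental code).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic regression data (assumption; any dataset would do)
X = torch.randn(1024, 64)
y = torch.randn(1024, 1)

model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 1))

def effective_rank(W: torch.Tensor, eps: float = 1e-8) -> float:
    """Entropy-based effective rank of a matrix (Roy & Vetterli, 2007)."""
    s = torch.linalg.svdvals(W)
    p = s / (s.sum() + eps)
    return float(torch.exp(-(p * torch.log(p + eps)).sum()))

# Small batch, large learning rate, nonzero weight decay: the regime in
# which the abstract predicts the strongest rank-minimization bias.
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=5e-3)
loss_fn = nn.MSELoss()
batch_size = 16

for step in range(2001):
    idx = torch.randint(0, X.shape[0], (batch_size,))
    loss = loss_fn(model(X[idx]), y[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        r = effective_rank(model[0].weight.detach())
        print(f"step {step:4d}  loss {loss.item():.4f}  eff. rank {r:.1f}")
```

Re-running the loop with a larger batch size or with weight_decay=0.0 would, per the abstract's prediction, yield a noticeably higher effective rank at the end of training.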
Is NeurIPS Submission: No
Submission Number: 7