Invariant Low-Dimensional Subspaces in Gradient Descent for Learning Deep Linear Networks

Published: 20 Nov 2023, Last Modified: 01 Dec 2023, CPAL 2024 (Recent Spotlight Track) Poster
Keywords: implicit bias, low dimensional structures, deep linear networks
Abstract: Over the past few years, an extensively studied phenomenon in training deep networks has been the implicit bias of gradient descent towards parsimonious solutions. In this work, we investigate this phenomenon by narrowing our focus to deep linear networks. Through our analysis, we reveal a surprising "law of parsimony" in the learning dynamics when the data possesses low-dimensional structures. Specifically, we show that the evolution of gradient descent starting from orthogonal initialization only affects a minimal portion of the singular vector spaces across all weight matrices. In other words, the learning process happens only within a small invariant subspace of each weight matrix, even though all weight parameters are updated throughout training. This simplicity in the learning dynamics could have significant implications for both efficient training and a better understanding of deep networks. First, the analysis enables us to considerably improve training efficiency by exploiting the low-dimensional structure of the learning dynamics. We can construct smaller, equivalent deep linear networks without sacrificing the benefits associated with their wider counterparts. Second, it allows us to better understand deep representation learning by elucidating the linear, progressive separation and concentration of representations from shallow to deep layers.
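For intuition, the sketch below is a minimal toy experiment in the spirit of the abstract, not the authors' code: it runs gradient descent on a small deep matrix factorization (a deep linear network fit to a rank-r target) from scaled orthogonal initialization, then inspects the singular value spectrum of each weight matrix. All choices (width, depth, target rank, initialization scale, learning rate, step count, and the specific squared-error objective) are illustrative assumptions; in this toy setup, all but a handful of singular values per layer remain at a shared "bulk" value, consistent with learning happening inside a small invariant subspace.

```python
# Hypothetical sketch: parsimonious GD dynamics in a deep linear network.
# Not the paper's setup verbatim; sizes and hyperparameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, r, L = 30, 2, 3            # width, target rank, depth (illustrative)
eps, lr, steps = 0.5, 0.02, 3000

# Rank-r target Phi = U diag(sigma) V^T.
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
V, _ = np.linalg.qr(rng.standard_normal((d, r)))
Phi = U @ np.diag(np.linspace(1.0, 2.0, r)) @ V.T

# Scaled orthogonal (hence balanced) initialization of each weight matrix.
Ws = [eps * np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(L)]

def prod(mats):
    """Return the end-to-end product W_k ... W_1 of the given factors."""
    P = np.eye(d)
    for W in mats:
        P = W @ P
    return P

# Plain gradient descent on f = 0.5 * ||W_L ... W_1 - Phi||_F^2.
for _ in range(steps):
    G = prod(Ws) - Phi                       # gradient w.r.t. the end-to-end product
    grads = []
    for l in range(L):
        left = prod(Ws[l + 1:])              # W_L ... W_{l+1}
        right = prod(Ws[:l])                 # W_{l-1} ... W_1
        grads.append(left.T @ G @ right.T)   # gradient w.r.t. W_{l+1}
    for W, g in zip(Ws, grads):
        W -= lr * g

# Empirical check of the "law of parsimony": in each weight matrix, only a few
# singular values deviate from a single shared bulk value; the rest are untouched
# by training (up to a common rescaling).
for l, W in enumerate(Ws, 1):
    s = np.linalg.svd(W, compute_uv=False)
    bulk = np.median(s)
    moved = int(np.sum(np.abs(s - bulk) > 1e-3 * s.max()))
    print(f"layer {l}: {moved} of {d} singular values deviate from the bulk")
```

Running this prints, for each layer, how many singular values moved away from the common bulk value; in this toy setup the count stays at most 2r, far smaller than the width d.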
Track Confirmation: Yes, I am submitting to the recent spotlight track.
Submission Number: 31