## Gradient descent aligns the layers of deep linear networks

27 Sept 2018, 22:38 (modified: 10 Feb 2022, 11:32) · ICLR 2019 Conference Blind Submission · Readers: Everyone
Keywords: implicit regularization, alignment of layers, deep linear networks, gradient descent, separable data
Abstract: This paper establishes risk convergence and asymptotic weight matrix alignment --- a form of implicit regularization --- of gradient flow and gradient descent when applied to deep linear networks on linearly separable data. In more detail, for gradient flow applied to strictly decreasing loss functions (with similar results for gradient descent with particular decreasing step sizes): (i) the risk converges to 0; (ii) the normalized i-th weight matrix asymptotically equals its rank-1 approximation u_i v_i^T; (iii) these rank-1 matrices are aligned across layers, meaning |v_{i+1}^T u_i| -> 1. In the case of the logistic loss (binary cross entropy), more can be said: the linear function induced by the network --- the product of its weight matrices --- converges to the same direction as the maximum margin solution. This last property was identified in prior work, but only under assumptions on gradient descent which here are implied by the alignment phenomenon.
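
Stated more explicitly (this is a paraphrase of the abstract's claims, not the paper's theorem statements; the Frobenius normalization and limit notation are assumptions about the intended formalization, with u_i, v_i the top singular vectors of the i-th weight matrix):

```latex
% Gradient flow on a deep linear network f(x) = W_L \cdots W_1 x,
% strictly decreasing loss, linearly separable data (paraphrasing the abstract):
\begin{align*}
&\text{(i)}   \quad \mathcal{R}(t) \to 0, \\
&\text{(ii)}  \quad \frac{W_i(t)}{\|W_i(t)\|_F} \to u_i v_i^\top
               \quad \text{(each layer tends to a rank-1 matrix)}, \\
&\text{(iii)} \quad |v_{i+1}^\top u_i| \to 1
               \quad \text{(adjacent layers align)}, \\
&\text{(logistic loss)} \quad
  \frac{W_L(t) \cdots W_1(t)}{\|W_L(t) \cdots W_1(t)\|} \to \bar{u},
  \quad \bar{u} \ \text{the maximum margin direction}.
\end{align*}
```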
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10)
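
As a concrete illustration of claim (iii), here is a minimal NumPy sketch, not taken from the paper: the toy data, depth, width, constant step size, and iteration count are all assumptions made for the example. It trains a small deep linear network with full-batch gradient descent on the logistic loss over linearly separable data, then reports |v_{i+1}^T u_i| between adjacent layers' top singular vectors, which should drift toward 1 as training proceeds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable toy data: the label is the sign of the first coordinate,
# which is pushed away from zero to enforce a positive margin.
n, d = 200, 2
X = rng.normal(size=(n, d))
X[:, 0] += np.sign(X[:, 0]) * 0.5
y = np.sign(X[:, 0])

# Deep linear network f(x) = W_L ... W_1 x with a scalar output.
depth, width = 3, 4
dims = [d] + [width] * (depth - 1) + [1]
Ws = [0.1 * rng.normal(size=(dims[i + 1], dims[i])) for i in range(depth)]

def forward_acts(Ws, X):
    """Return layer activations: acts[0] = inputs, acts[-1] = network outputs."""
    acts = [X.T]
    for W in Ws:
        acts.append(W @ acts[-1])
    return acts

def logistic_risk(Ws, X, y):
    margins = y * forward_acts(Ws, X)[-1].ravel()
    return np.mean(np.logaddexp(0.0, -margins))   # mean log(1 + exp(-y f(x)))

def gradients(Ws, X, y):
    acts = forward_acts(Ws, X)
    margins = y * acts[-1].ravel()
    # d(risk)/d(output) for the logistic loss: -y * sigmoid(-y f(x)) / n.
    delta = (-y * np.exp(-np.logaddexp(0.0, margins)) / len(y)).reshape(1, -1)
    grads = []
    for i in reversed(range(len(Ws))):
        grads.append(delta @ acts[i].T)      # dL/dW_i
        delta = Ws[i].T @ delta              # backpropagate to the previous layer
    return grads[::-1]

eta = 0.5
for step in range(20000):
    Ws = [W - eta * G for W, G in zip(Ws, gradients(Ws, X, y))]

# Alignment check: |v_{i+1}^T u_i| between the top left singular vector of layer i
# and the top right singular vector of layer i+1 should be close to 1.
svds = [np.linalg.svd(W) for W in Ws]
for i in range(depth - 1):
    u_i = svds[i][0][:, 0]        # top left singular vector of W_i
    v_next = svds[i + 1][2][0]    # top right singular vector of W_{i+1}
    print(f"layers {i}->{i + 1}: |v^T u| = {abs(v_next @ u_i):.4f}")
print(f"final risk: {logistic_risk(Ws, X, y):.6f}")
```

Because the logistic loss has no finite minimizer on separable data, the risk only approaches 0 and the alignment is asymptotic, so the printed values get closer to 1 the longer the sketch is run.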