Low-Rank Bias, Weight Decay, and Model Merging in Neural Networks
TL;DR: We show that weight decay induces low-rank structure in the weight matrices of deep homogeneous neural networks. We also show that this low-rank structure enables effective model merging by simply summing the weight matrices of separately trained networks.
Abstract: We explore the low-rank structure of the weight matrices of neural networks
at stationary points (limiting solutions of optimization algorithms)
under $L_2$ regularization (also known as weight decay). We establish several
properties of such deep neural networks induced by $L_2$ regularization.
In particular, at a stationary point we show alignment of the
parameters and the gradient, norm preservation across layers, and a low-rank
bias: properties previously known in the context of solutions of gradient descent/flow type algorithms. Experiments
show that the assumptions made in the analysis only mildly affect the
observations.
In addition, we investigate a multitask learning phenomenon enabled by $L_2$
regularization and the low-rank bias. In particular, we show that if two networks
are trained such that the inputs in the training set of one network are
approximately orthogonal to the inputs in the training set of the other
network, then the network obtained by simply summing the weights of the two
networks performs as well on both training sets as the respective
individual networks. We demonstrate this for shallow ReLU neural networks
trained by gradient descent, as well as for deep linear networks trained by gradient flow.
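The two claims of the abstract (low-rank bias under weight decay, and merging by weight summation for tasks with orthogonal inputs) can be illustrated numerically. The sketch below is not the authors' code: it uses a two-layer linear network trained by gradient descent with weight decay as a minimal stand-in for the deep linear / gradient-flow setting, with exactly (rather than approximately) orthogonal input subspaces, and all hyperparameters, seeds, and teacher vectors are illustrative choices.

```python
# Minimal sketch (illustrative, not the paper's code): two-layer linear
# networks f(x) = w2 @ (W1 @ x), trained by full-batch gradient descent
# with weight decay, on two tasks whose inputs lie in orthogonal
# coordinate subspaces.
import numpy as np

d, m, n = 4, 256, 128               # input dim, hidden width, samples per task
lr, lam, steps = 0.008, 0.075, 20000  # step size, weight decay, GD steps

def make_task(coords, v, seed):
    """Inputs supported on the given coordinates; linear teacher v."""
    r = np.random.default_rng(seed)
    X = np.zeros((n, d))
    X[:, coords] = r.standard_normal((n, len(coords)))
    return X, X @ v

def train(X, y, seed):
    r = np.random.default_rng(seed)
    W1 = r.standard_normal((m, d)) * 0.5    # ~ 1/sqrt(d) init scale
    w2 = r.standard_normal(m) / np.sqrt(m)
    for _ in range(steps):
        h = X @ W1.T                        # (n, m) hidden activations
        err = h @ w2 - y                    # residuals
        gW1 = np.outer(w2, err @ X) / n     # dL/dW1 = mean err * w2 x^T
        gw2 = err @ h / n                   # dL/dw2 = mean err * h
        W1 -= lr * (gW1 + lam * W1)         # GD step with weight decay
        w2 -= lr * (gw2 + lam * w2)
    return W1, w2

def mse(W1, w2, X, y):
    return float(np.mean((X @ W1.T @ w2 - y) ** 2))

vA = np.array([1.0, -0.5, 0.0, 0.0])        # task A teacher (coords 0, 1)
vB = np.array([0.0, 0.0, 0.8, 1.2])         # task B teacher (coords 2, 3)
XA, yA = make_task([0, 1], vA, 1)
XB, yB = make_task([2, 3], vB, 2)

W1A, w2A = train(XA, yA, 3)
W1B, w2B = train(XB, yB, 4)

# Low-rank bias: weight decay drives each W1 toward (numerical) rank 1.
sA = np.linalg.svd(W1A, compute_uv=False)
print("singular values of W1A:", sA[:3])

# Rows of each W1 align with that task's input subspace: the "foreign"
# coordinates receive zero loss gradient and decay to zero.
print("|W1A| on task-B coords:", np.abs(W1A[:, 2:]).max())

# Model merging: simply sum the weights of the two networks.
W1M, w2M = W1A + W1B, w2A + w2B
print("task A mse: solo %.4f merged %.4f"
      % (mse(W1A, w2A, XA, yA), mse(W1M, w2M, XA, yA)))
print("task B mse: solo %.4f merged %.4f"
      % (mse(W1B, w2B, XB, yB), mse(W1M, w2M, XB, yB)))
```

Because each trained first layer annihilates the other task's inputs and the extraneous hidden directions have decayed, the merged network's error on each task stays close to the corresponding individual network's error, while each $W_1$ ends up numerically rank one.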
Submission Number: 211