Gradient flow dynamics of shallow ReLU networks for square loss and orthogonal inputs

Published: 31 Oct 2022, Last Modified: 03 Jul 2024
NeurIPS 2022 Accept
Readers: Everyone
Keywords: implicit bias, two-layer neural networks, gradient flow, gradient descent, global convergence, ReLU networks, variation norm, non-convex optimisation
Abstract: The training of neural networks by gradient descent methods is a cornerstone of the deep learning revolution. Yet, despite some recent progress, a complete theory explaining its success is still missing. This article presents, for orthogonal input vectors, a precise description of the gradient flow dynamics of training one-hidden-layer ReLU neural networks for the mean squared error at small initialisation. In this setting, despite non-convexity, we show that the gradient flow converges to zero loss and characterise its implicit bias towards minimum variation norm. Furthermore, some interesting phenomena are highlighted: a quantitative description of the initial alignment phenomenon and a proof that the process follows a specific saddle-to-saddle dynamics.
TL;DR: We precisely describe the gradient flow dynamics of non-linear neural networks for regression at small initialisation with orthogonal data. We show that it converges to zero loss and characterise its implicit bias towards minimum variation norm.
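The setting described above can be mocked up numerically. Below is a minimal sketch (not the authors' code; the width, initialisation scale, step size, and all variable names are illustrative assumptions) that uses plain gradient descent with a small step size as a discretisation of gradient flow for a one-hidden-layer ReLU network on the mean squared error, with orthogonal inputs and small initialisation. At the end it prints the quantity sum_j |a_j|·||w_j||, a rough proxy for the variation norm of the learned predictor.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's code): gradient descent with a
# small step size as a discretisation of gradient flow for a one-hidden-layer
# ReLU network f(x) = sum_j a_j * relu(<w_j, x>), trained on the mean squared
# error with orthogonal inputs and a small initialisation scale.
rng = np.random.default_rng(0)

d, n, m = 10, 10, 50            # input dimension, sample count (n <= d), width
X = np.eye(d)[:n]               # orthogonal inputs (standard basis vectors)
y = rng.standard_normal(n)      # arbitrary regression targets

scale = 1e-3                    # small initialisation
W = scale * rng.standard_normal((m, d))
a = scale * rng.standard_normal(m)

lr, steps = 1e-2, 200_000       # small step size approximates the flow

for t in range(steps):
    pre = X @ W.T                       # (n, m) pre-activations
    h = np.maximum(pre, 0.0)            # ReLU activations
    r = h @ a - y                       # residuals
    if t % 20_000 == 0:
        print(f"step {t:7d}  loss {0.5 * np.mean(r**2):.3e}")

    grad_a = h.T @ r / n                                # d(loss)/d a
    grad_W = ((r[:, None] * (pre > 0) * a).T @ X) / n   # d(loss)/d W
    a -= lr * grad_a
    W -= lr * grad_W

# Rough proxy for the variation norm of the learned predictor.
print("sum_j |a_j| * ||w_j|| =", np.sum(np.abs(a) * np.linalg.norm(W, axis=1)))
```

With a very small initialisation scale the loss typically stays on long plateaus before dropping (the alignment and saddle-to-saddle behaviour highlighted in the abstract), so the number of steps may need to be adjusted depending on the chosen scale and step size.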
Supplementary Material: zip
Community Implementations: 1 code implementation (CatalyzeX): https://www.catalyzex.com/paper/gradient-flow-dynamics-of-shallow-relu/code