Learning a Neuron by a Shallow ReLU Network: Dynamics and Implicit Bias for Correlated Inputs

Published: 21 Sept 2023, Last Modified: 02 Nov 2023. NeurIPS 2023 poster.
Keywords: implicit bias, implicit regularization, training dynamics, ReLU networks, gradient flow, theoretical analysis
TL;DR: We give a detailed analysis of the dynamics, convergence, and implicit bias of gradient flow from small initialisation when learning a single neuron with a one-hidden-layer ReLU network, in the setting where the training points are correlated with the teacher.
Abstract: We prove that, for the fundamental regression task of learning a single neuron, training a one-hidden-layer ReLU network of any width by gradient flow from a small initialisation converges to zero loss and is implicitly biased to minimise the rank of the network parameters. By assuming that the training points are correlated with the teacher neuron, we complement previous work that considered orthogonal datasets. Our results are based on a detailed non-asymptotic analysis of the dynamics of each hidden neuron throughout training. We also exhibit and characterise a surprising distinction in this setting between interpolator networks of minimal rank and those of minimal Euclidean norm. Finally, we perform a range of numerical experiments, which corroborate our theoretical findings.
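The setting studied in the paper can be illustrated with a small numerical sketch (not the authors' actual experiments; the dimensions, teacher direction, learning rate, and initialisation scale below are illustrative choices). A teacher neuron y = ReLU(v·x) generates labels, every training point is positively correlated with v, and a one-hidden-layer ReLU student is trained by gradient descent (a discretisation of gradient flow) from a small initialisation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 5, 20, 40  # input dim, hidden width, number of training points

# Teacher neuron: y = ReLU(v . x) with unit-norm v (illustrative choice)
v = np.zeros(d)
v[0] = 1.0

# Correlated inputs: each x_i has positive inner product with the teacher v
X = rng.normal(size=(n, d))
X[:, 0] = np.abs(X[:, 0]) + 0.5   # guarantees v . x_i > 0 for every sample
y = np.maximum(X @ v, 0.0)

# Student: f(x) = sum_j a_j ReLU(w_j . x), small initialisation scale eps
eps = 1e-3
W = eps * rng.normal(size=(m, d))
a = eps * rng.normal(size=m)

def loss(W, a):
    pred = np.maximum(X @ W.T, 0.0) @ a
    return 0.5 * np.mean((pred - y) ** 2)

lr = 0.05  # small step size, approximating gradient flow
losses = [loss(W, a)]
for _ in range(20000):
    H = np.maximum(X @ W.T, 0.0)   # hidden activations, shape (n, m)
    r = (H @ a - y) / n            # scaled residual
    G = (X @ W.T > 0).astype(float)  # ReLU derivative indicator
    grad_a = H.T @ r
    grad_W = ((r[:, None] * G) * a).T @ X
    a -= lr * grad_a
    W -= lr * grad_W
    losses.append(loss(W, a))

print(f"initial loss {losses[0]:.4f}, final loss {losses[-1]:.2e}")
```

Under this correlated-data assumption the training loss is driven to (near) zero, consistent with the convergence result stated in the abstract; one can further inspect `W` after training to see that the hidden neurons concentrate along few directions, in line with the low-rank implicit bias.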
Supplementary Material: pdf
Submission Number: 5052