Penalising the biases in norm regularisation enforces sparsity

Published: 21 Sept 2023, Last Modified: 20 Dec 2023, NeurIPS 2023 poster
Keywords: Neural networks, Min norm interpolators, Sparsity, Representational cost
TL;DR: For one hidden ReLU layer networks with unidimensional data, we characterise the parameters' norm required to represent a function, as well as the sparsity (in kinks) of min norm interpolators.
Abstract: Controlling the parameters' norm often yields good generalisation when training neural networks. Beyond simple intuitions, the relation between regularising the parameters' norm and the obtained estimators remains poorly understood theoretically. For one hidden ReLU layer networks with unidimensional data, this work shows that the parameters' norm required to represent a function is given by the total variation of its second derivative, weighted by a $\sqrt{1+x^2}$ factor. Notably, this weighting factor disappears when the norm of the bias terms is not regularised. The presence of this additional weighting factor is of utmost significance, as it is shown to enforce the uniqueness and sparsity (in the number of kinks) of the minimal norm interpolator. Conversely, omitting the bias terms' norm allows for non-sparse solutions. Penalising the bias terms in the regularisation, either explicitly or implicitly, thus leads to sparse estimators.
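To make the abstract's central quantity concrete, a schematic rendering of the two representational costs it contrasts is sketched below; the notation ($R$) and the restriction to twice-differentiable $f$, in place of the general total-variation formulation, are illustrative assumptions rather than the paper's exact statement, and boundary terms are omitted:

$$R_{\text{biases penalised}}(f) \;=\; \int_{\mathbb{R}} \sqrt{1+x^2}\,\big|f''(x)\big|\,\mathrm{d}x, \qquad\quad R_{\text{biases not penalised}}(f) \;=\; \int_{\mathbb{R}} \big|f''(x)\big|\,\mathrm{d}x.$$

The $\sqrt{1+x^2}$ weight on the left is what the abstract credits with making the minimal norm interpolator unique and sparse in its number of kinks; without it, non-sparse minimisers can exist.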
Supplementary Material: zip
Submission Number: 4474