Self-Regularity of Non-Negative Output Weights for Overparameterized Two-Layer Neural Networks

Published: 01 Jan 2022, Last Modified: 29 Sept 2024. IEEE Trans. Signal Process. 2022. License: CC BY-SA 4.0.
Abstract: We consider the problem of finding a two-layer neural network with sigmoid, rectified linear unit (ReLU), or binary step activation functions that “fits” a training data set as accurately as possible, as quantified by the training error, and study the following question: does a low training error guarantee that the norm of the output layer (outer norm) is itself small? We answer this question affirmatively for the case of non-negative output weights. Using a simple covering number argument, we establish that, under quite mild distributional assumptions on the input/label pairs, any such network achieving a small training error on polynomially many data points necessarily has a well-controlled outer norm. Notably, our results (a) have polynomial (in $d$) sample complexity, (b) are independent of the number of hidden units (which can potentially be very large), (c) are oblivious to the training algorithm, and (d) require quite mild assumptions on the data (in particular, the input vector $X\in \mathbb{R}^{d}$ need not have independent coordinates). We then leverage our bounds to establish generalization guarantees for such networks through the fat-shattering dimension, a scale-sensitive complexity measure of the class to which the network architectures we investigate belong. Notably, our generalization bounds also have good sample complexity (polynomial in $d$ of low degree), and are in fact near-linear for some important cases of interest.
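For concreteness, a two-layer network of the kind described above can be written (in illustrative notation; the precise setup is given in the paper) as
$$f(x) \;=\; \sum_{j=1}^{m} a_j\, \sigma\big(\langle w_j, x\rangle + b_j\big), \qquad a_j \ge 0,$$
where $\sigma$ is the sigmoid, ReLU, or binary step activation, $m$ is the (possibly very large) number of hidden units, $w_j \in \mathbb{R}^{d}$ and $b_j$ are the hidden-layer parameters, and the “outer norm” refers to a norm of the output weight vector $a=(a_1,\dots,a_m)$ (e.g., the $\ell_1$ norm $\sum_{j} a_j$ under the non-negativity assumption; the exact norm used is specified in the paper). The self-regularity statement then asserts that any such network achieving a small training error on polynomially many samples has this norm well controlled, independently of $m$.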