Keywords: Theory of neural networks, Bayes-optimal learning, non-convex optimization, statistical physics, high-dimensional statistics
TL;DR: We show how to compute exact asymptotic predictions for the minimal generalization error reachable in learning an extensive-width shallow neural network, with a number of data samples quadratic in the dimension.
Abstract: We consider the problem of learning a target function corresponding to a single-hidden-layer
neural network with a quadratic activation function after the first layer
and random weights. We work in the asymptotic limit where the input dimension
and the network width are proportionally large. Recent work [Cui et al., 2023]
established that linear regression achieves the Bayes-optimal test error for learning such
a function when the number of available samples is only linear in the dimension.
That work stressed the open challenge of theoretically analyzing the optimal test
error in the more interesting regime where the number of samples is quadratic in
the dimension. In this paper, we solve this challenge for quadratic activations and
derive a closed-form expression for the Bayes-optimal test error. We also provide an
algorithm, which we call GAMP-RIE, that combines approximate message passing
with rotationally invariant matrix denoising and asymptotically achieves the
optimal performance. Technically, our result is enabled by establishing a link
with recent works on optimal denoising of extensive-rank matrices and on the
ellipsoid fitting problem. We further show empirically that, in the absence of
noise, randomly initialized gradient descent appears to sample the space of weights
leading to zero training loss, and that averaging the resulting predictors over initializations
yields a test error equal to the Bayes-optimal one.
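To make the setting concrete, the following is a minimal illustrative sketch (not the paper's code) of the setup and of the averaging-over-initializations experiment: a teacher network with quadratic activation and random Gaussian weights, a number of samples quadratic in the dimension, and a student trained by full-batch gradient descent from several random initializations whose test predictions are averaged. The dimensions, the 1/m output scaling, the absence of trained second-layer weights, and the learning rate and step count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 30, 15                 # input dimension and hidden width (extensive: m proportional to d)
n_train = 2 * d * d           # number of samples quadratic in the dimension (illustrative factor)
n_test = 1000

def forward(W, X):
    """One-hidden-layer net with quadratic activation: f(x) = (1/m) * sum_k (w_k . x)^2."""
    return np.mean((X @ W.T) ** 2, axis=1)

# Teacher with random Gaussian weights; noiseless labels.
W_star = rng.standard_normal((m, d)) / np.sqrt(d)
X_train = rng.standard_normal((n_train, d))
X_test = rng.standard_normal((n_test, d))
y_train = forward(W_star, X_train)
y_test = forward(W_star, X_test)

def train_gd(seed, steps=2000, lr=0.05):
    """Full-batch gradient descent on the squared loss, from one random initialization."""
    W = np.random.default_rng(seed).standard_normal((m, d)) / np.sqrt(d)
    for _ in range(steps):
        pre = X_train @ W.T                             # (n, m) pre-activations w_k . x_i
        resid = np.mean(pre ** 2, axis=1) - y_train     # prediction residuals
        grad = (4.0 / (n_train * m)) * (resid[:, None] * pre).T @ X_train
        W -= lr * grad
    return W

# Average the test predictions of several independently initialized GD runs;
# the abstract reports that this ensemble average attains the Bayes-optimal test error.
preds = np.mean([forward(train_gd(s), X_test) for s in range(10)], axis=0)
print("ensemble test MSE:", np.mean((preds - y_test) ** 2))
```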
Primary Area: Learning theory
Submission Number: 11045