Abstract: Despite their massive size, successful deep artificial neural networks can
exhibit a remarkably small difference between training and test performance.
Conventional wisdom attributes small generalization error either to properties
of the model family, or to the regularization techniques used during training.
Through extensive systematic experiments, we show how these traditional
approaches fail to explain why large neural networks generalize well in
practice. Specifically, our experiments establish that state-of-the-art
convolutional networks for image classification trained with stochastic
gradient methods easily fit a random labeling of the training data. This
phenomenon is qualitatively unaffected by explicit regularization, and occurs
even if we replace the true images by completely unstructured random noise. We
corroborate these experimental findings with a theoretical construction
showing that simple depth-two neural networks already have perfect
finite-sample expressivity as soon as the number of parameters exceeds the
number of data points, as it usually does in practice.
We interpret our experimental findings by comparison with traditional models.
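As an illustration of the randomization test described in the abstract, the following is a minimal sketch, assuming PyTorch and torchvision: CIFAR-10 labels are replaced with uniformly random ones and a small convolutional network is trained with plain SGD until it fits them. The architecture, learning rate, and epoch count are illustrative placeholders, not the paper's exact configurations.

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# Load CIFAR-10 and replace every label with a uniformly random one,
# destroying any relationship between images and labels.
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor())
g = torch.Generator().manual_seed(0)
train_set.targets = torch.randint(0, 10, (len(train_set),), generator=g).tolist()

loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

# A small convolutional network (a stand-in for the paper's architectures).
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(128 * 8 * 8, 512), nn.ReLU(),
    nn.Linear(512, 10),
)

opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):  # enough passes to memorize the random labels
    correct = total = 0
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        correct += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
    print(f"epoch {epoch}: train accuracy on random labels = {correct / total:.3f}")
```

With enough training, the reported training accuracy approaches 1.0 despite the labels carrying no signal, while test accuracy stays at chance; swapping the images for Gaussian noise of the same shape gives a similar outcome.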
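The finite-sample expressivity claim can likewise be sketched constructively. The NumPy snippet below builds a depth-two ReLU network with n hidden units (2n + d weights) that exactly interpolates n arbitrary labels, in the spirit of the construction the abstract refers to; the random-projection trick and the specific code are illustrative choices, not the paper's verbatim proof.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 50, 10                       # n data points in d dimensions
X = rng.normal(size=(n, d))         # arbitrary inputs
y = rng.normal(size=n)              # arbitrary (e.g. random) targets

# Project inputs onto a random direction; with probability 1 the
# projections z_i are distinct.
a = rng.normal(size=d)
z = X @ a
order = np.argsort(z)
X, y, z = X[order], y[order], z[order]

# Choose biases b_j with b_j < z_j <= z_i for i >= j, so that
# relu(z_i - b_j) > 0 exactly when i >= j.
b = np.empty(n)
b[0] = z[0] - 1.0
b[1:] = (z[:-1] + z[1:]) / 2.0

# The hidden activation matrix A_ij = relu(z_i - b_j) is lower
# triangular with a positive diagonal, hence invertible.
A = np.maximum(z[:, None] - b[None, :], 0.0)
c = np.linalg.solve(A, y)           # output weights

def f(X_batch):
    """Depth-two ReLU network with n hidden units and 2n + d weights."""
    z_batch = X_batch @ a
    return np.maximum(z_batch[:, None] - b, 0.0) @ c

# Numerically zero (up to floating-point error): the network fits exactly.
print("max |f(x_i) - y_i| =", np.abs(f(X) - y).max())
```

Because the activation matrix is triangular with a positive diagonal, the output weights can be read off from a single linear solve, which is why roughly as many parameters as data points already suffice for a perfect fit.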
TL;DR: Through extensive systematic experiments, we show how the traditional approaches fail to explain why large neural networks generalize well in practice, and why understanding deep learning requires rethinking generalization.
Keywords: Deep learning
Conflicts: mit.edu, zju.edu.cn, google.com, berkeley.edu
Community Implementations: [12 code implementations](https://www.catalyzex.com/paper/arxiv:1611.03530/code)