Abstract: An emerging design principle in deep learning is that each layer of a deep
artificial neural network should be able to easily express the identity
transformation. This idea not only motivated various normalization techniques,
such as batch normalization, but was also key to the immense success of
residual networks.
In this work, we put the principle of identity parameterization on a more
solid theoretical footing alongside further empirical progress. We first
give a strikingly simple proof that arbitrarily deep linear residual networks
have no spurious local optima. The same result for feed-forward networks in
their standard parameterization is substantially more delicate. Second, we
show that residual networks with ReLU activations have universal finite-sample
expressivity in the sense that the network can represent any function of its
sample provided that the model has more parameters than the sample size.
Directly inspired by our theory, we experiment with a radically simple
residual architecture consisting of only residual convolutional layers and
ReLU activations, but no batch normalization, dropout, or max pooling. Our model
improves significantly on previous all-convolutional networks on the CIFAR10,
CIFAR100, and ImageNet classification benchmarks.
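
For context, here is a minimal sketch of what a "linear residual network" means in the first result, assuming square d×d layers A_j that perturb the identity; the notation (A_j, d) and the precise conditions of the no-spurious-local-optima theorem are not spelled out in this abstract and should be taken from the paper itself.

```latex
% Illustrative sketch only: a depth-L linear residual network, each layer a
% perturbation of the identity (notation A_j, d assumed here for illustration).
\[
  y \;=\; (I + A_L)(I + A_{L-1})\cdots(I + A_1)\,x,
  \qquad A_1,\dots,A_L \in \mathbb{R}^{d \times d}.
\]
% Setting every A_j = 0 recovers the identity map; the abstract's claim is that
% the resulting least-squares objective has no spurious local optima.
```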
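Below is a minimal sketch of the kind of all-convolutional residual block the abstract describes, written in PyTorch for illustration; the channel count, two-convolution block structure, and the suggestion of strided convolutions for downsampling are assumptions, not the authors' exact architecture.

```python
# Minimal sketch (illustrative, not the authors' exact model): a residual block
# built only from convolutions and ReLU activations, with no batch
# normalization, dropout, or max pooling, as described in the abstract.
import torch
import torch.nn as nn


class SimpleResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Two 3x3 convolutions; padding=1 preserves the spatial size so the
        # identity skip connection can be added without reshaping.
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h(x) = x + f(x): the block can express the identity transformation
        # by driving the convolution weights toward zero.
        out = torch.relu(self.conv1(x))
        out = self.conv2(out)
        return torch.relu(x + out)


# Usage: stack blocks and downsample with strided convolutions instead of max pooling.
if __name__ == "__main__":
    block = SimpleResidualBlock(channels=16)
    y = block(torch.randn(1, 16, 32, 32))
    print(y.shape)  # torch.Size([1, 16, 32, 32])
```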
TL;DR: Emerging theory explaining residual networks alongside new empirical progress
Conflicts: google.com, berkeley.edu, cs.princeton.edu
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/arxiv:1611.04231/code)