Fixup Initialization: Residual Learning Without Normalization

Published: 21 Dec 2018, Last Modified: 22 Oct 2023 · ICLR 2019 Conference Blind Submission
Abstract: Normalization layers are a staple in state-of-the-art deep neural network architectures. They are widely believed to stabilize training, enable higher learning rates, accelerate convergence, and improve generalization, though the reason for their effectiveness remains an active research topic. In this work, we challenge these commonly held beliefs by showing that none of the perceived benefits is unique to normalization. Specifically, we propose fixed-update initialization (Fixup), an initialization motivated by solving the exploding and vanishing gradient problem at the beginning of training via properly rescaling a standard initialization. We find training residual networks with Fixup to be as stable as training with normalization -- even for networks with 10,000 layers. Furthermore, with proper regularization, Fixup enables residual networks without normalization to achieve state-of-the-art performance in image classification and machine translation.
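Below is a minimal PyTorch sketch of the rescaling idea the abstract describes. The block layout, the helper name `fixup_init`, and the specific rule (scale the standard He initialization of layers inside each residual branch by L^(-1/(2m-2)), zero-initialize the last layer of each branch, and add scalar biases and a multiplier) follow the paper's Fixup recipe; the code here is a simplified illustration under those assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

# Simplified two-layer residual branch with Fixup-style scalar
# biases and a scalar multiplier (no normalization layers).
class FixupBasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.bias1 = nn.Parameter(torch.zeros(1))
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bias2 = nn.Parameter(torch.zeros(1))
        self.relu = nn.ReLU(inplace=True)
        self.bias3 = nn.Parameter(torch.zeros(1))
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.scale = nn.Parameter(torch.ones(1))
        self.bias4 = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        out = self.conv1(x + self.bias1)
        out = self.relu(out + self.bias2)
        out = self.conv2(out + self.bias3)
        return x + self.scale * out + self.bias4


def fixup_init(blocks, layers_per_branch=2):
    """Rescale a standard initialization by L**(-1/(2m-2)) and
    zero the last layer of every residual branch."""
    L = len(blocks)                               # number of residual branches
    m = layers_per_branch                         # layers inside each branch
    scale = L ** (-1.0 / (2 * m - 2))
    for block in blocks:
        # Standard He initialization, then rescale the first layer.
        nn.init.kaiming_normal_(block.conv1.weight, nonlinearity='relu')
        block.conv1.weight.data.mul_(scale)
        # Zero-initialize the last layer so each branch starts as identity.
        nn.init.zeros_(block.conv2.weight)


# Usage: build a stack of residual branches and apply Fixup scaling.
blocks = nn.ModuleList([FixupBasicBlock(16) for _ in range(8)])
fixup_init(blocks)
```

Because each residual branch initially outputs zero, the network computes the identity at initialization, so gradients neither explode nor vanish with depth; the rescaling keeps updates well-behaved once training begins.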
Keywords: deep learning, residual networks, initialization, batch normalization, layer normalization
TL;DR: All you need to train deep residual networks is a good initialization; normalization layers are not necessary.
Code: [Papers with Code – 7 community implementations](https://paperswithcode.com/paper/?openreview=H1gsz30cKX)
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [SVHN](https://paperswithcode.com/dataset/svhn)
Community Implementations: [CatalyzeX – 8 code implementations](https://www.catalyzex.com/paper/arxiv:1901.09321/code)
