Root Mean Square Layer Normalization

Anonymous

Dec 11, 2018 (OpenReview Anonymous Preprint Blind Submission)
  • Abstract: Layer normalization (LayerNorm) has been successfully applied to various deep neural networks to help stabilize training and boost model convergence because of its capability to handle the re-centering and re-scaling of both inputs and weight matrices. However, the computational overhead introduced by LayerNorm makes these improvements expensive and significantly slows the underlying network, RNNs in particular. In this paper, we hypothesize that the re-centering invariance in LayerNorm is dispensable and propose root mean square layer normalization, or \textit{RMSNorm}. RMSNorm regularizes the summed inputs to a neuron in one layer according to the root mean square (RMS), giving the model the re-scaling invariance property and an implicit learning rate adaptation ability. RMSNorm is computationally simpler and thus more efficient than LayerNorm. We also present partial RMSNorm, or \textit{$p$RMSNorm}, where the RMS is estimated from $p$\% of the summed inputs without breaking the above properties. Extensive experiments on several tasks using diverse network architectures show that RMSNorm achieves performance comparable to LayerNorm while reducing the running time by 7\%$\sim$64\% on different models. We will release our source code soon.
  • Keywords: layer normalization, root mean square
  • TL;DR: RMSNorm, a simpler and faster alternative to LayerNorm that normalizes by the root mean square only (a minimal sketch follows this list).
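
A minimal sketch of the normalization described in the abstract, assuming a NumPy-style implementation. The function name rms_norm, the eps stabilizer, and the choice of using the first p fraction of features for the pRMSNorm estimate are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def rms_norm(x, gain, p=1.0, eps=1e-8):
    """Sketch of (p)RMSNorm over the last axis of x.

    x    : summed inputs to a layer, shape (..., d)
    gain : learnable per-feature scale, shape (d,)
    p    : fraction of features used to estimate the RMS (pRMSNorm);
           p=1.0 recovers plain RMSNorm
    eps  : small constant for numerical stability (illustrative value)
    """
    d = x.shape[-1]
    k = max(1, int(round(d * p)))  # number of features used for the RMS estimate
    # RMS statistic; unlike LayerNorm, no mean is subtracted and no bias is added.
    rms = np.sqrt(np.mean(np.square(x[..., :k]), axis=-1, keepdims=True) + eps)
    return x / rms * gain

# Usage: plain RMSNorm and pRMSNorm on a batch of 512-dimensional activations.
x = np.random.randn(2, 512)
g = np.ones(512)
y = rms_norm(x, g)              # RMSNorm
y_p = rms_norm(x, g, p=0.0625)  # pRMSNorm, RMS estimated from ~6.25% of the inputs
```

Because the statistic is a pure scale with no mean subtraction, multiplying the inputs by a constant leaves the output unchanged, which is the re-scaling invariance the abstract refers to; dropping the mean (and hence variance) computation is also the source of the claimed efficiency gain over LayerNorm.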