Keywords: Transformer, Normalization, Layer Normalization, RMSNorm, Efficient Machine Learning
TL;DR: We unify LayerNorm and RMSNorm in pre-normalization Transformers and propose equivalent, more efficient Transformer variants.
Abstract: Transformers have achieved great success in machine learning applications.
Normalization techniques, such as Layer Normalization (LayerNorm, LN) and Root Mean Square Normalization (RMSNorm), play a critical role in accelerating and stabilizing the training of Transformers.
While LayerNorm recenters and rescales input vectors, RMSNorm only rescales the vectors by their RMS value.
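As a point of reference, here is a minimal NumPy sketch (not the paper's implementation) of the two operations, with the learnable gain and bias omitted and only a small epsilon kept for numerical stability:

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # LayerNorm: recenter to zero mean, then rescale to unit standard deviation.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def rms_norm(x, eps=1e-6):
    # RMSNorm: rescale by the root mean square only; no recentering.
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return x / rms
```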
Despite being more computationally efficient, RMSNorm may compromise the representation ability of Transformers.
There is currently no consensus on the preferred normalization technique, especially among recent large language models: some models employ LayerNorm while others adopt RMSNorm.
Converting a Transformer built with one normalization type into the other is also challenging.
To bridge this divide, we propose a solution that unifies two mainstream Transformer architectures, Pre-LN and Pre-RMSNorm Transformers.
By removing the redundant mean information inherent in the main branch of Pre-LN Transformers, we can reduce LayerNorm to RMSNorm, achieving higher efficiency.
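The identity underlying this reduction is that applying LayerNorm to a vector gives the same result as applying RMSNorm to its recentered, zero-mean version. A self-contained numerical check in NumPy (a toy illustration of that identity, not the paper's code):

```python
import numpy as np

x = np.random.randn(8, 512)
mu = x.mean(axis=-1, keepdims=True)

layer_norm_x = (x - mu) / x.std(axis=-1, keepdims=True)       # LayerNorm(x), no affine
rms = np.sqrt(np.mean((x - mu) ** 2, axis=-1, keepdims=True))
rms_norm_centered = (x - mu) / rms                            # RMSNorm(x - mean(x))

# The two coincide, so if the main-branch activations are kept zero-mean,
# LayerNorm can be replaced by the cheaper RMSNorm.
assert np.allclose(layer_norm_x, rms_norm_centered)
```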
We further propose the Compressed RMSNorm (CRMSNorm) and Pre-CRMSNorm Transformer based on a lossless compression of the zero-mean vectors.
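The compression rests on the fact that a zero-mean vector in d dimensions is fully determined by its first d-1 entries, since the last entry is the negative sum of the others. A hypothetical encode/decode pair illustrating this lossless round trip (function names are illustrative, not from the paper):

```python
import numpy as np

def compress(x_zero_mean):
    # Drop the last coordinate; for a zero-mean vector it is recoverable.
    return x_zero_mean[..., :-1]

def decompress(z):
    # Restore the dropped coordinate as minus the sum of the rest,
    # making the vector zero-mean again.
    return np.concatenate([z, -z.sum(axis=-1, keepdims=True)], axis=-1)

x = np.random.randn(4, 512)
x = x - x.mean(axis=-1, keepdims=True)           # zero-mean activations
assert np.allclose(decompress(compress(x)), x)   # lossless up to float error
```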
We formally establish the equivalence of Pre-LN, Pre-RMSNorm, and Pre-CRMSNorm Transformer variants in both training and inference.
This equivalence implies that Pre-LN Transformers can be substituted with their Pre-(C)RMSNorm counterparts at almost no cost, offering the same arithmetic functionality along with a free efficiency improvement.
Experiments demonstrate that we can reduce the training and inference time of Pre-LN Transformers by 1% to 10%.
Supplementary Material: zip
Submission Number: 1724