Peri-LN: Revisiting Normalization Layer in the Transformer Architecture

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-SA 4.0
TL;DR: A comparative analysis of normalization layers in Transformers reveals that Peri-LN—adopted in recent large-scale architectures yet underexplored—effectively balances variance and stabilizes gradients, making it advantageous for large-scale training.
Abstract: Selecting a layer normalization (LN) strategy that stabilizes training and speeds convergence in Transformers remains difficult, even for today’s large language models (LLMs). We present a comprehensive analytical foundation for understanding how different LN strategies influence training dynamics in large-scale Transformers. Pre-LN and Post-LN have long dominated practice despite their limitations in large-scale training. Recently, however, several open-source models have quietly begun adopting a third strategy without much explanation. This strategy places normalization layers **peripherally** around sublayers, a design we term **Peri-LN**. While Peri-LN has demonstrated promising performance, its precise mechanisms and benefits remain largely unexplored. Our in-depth analysis delineates the distinct behaviors of LN strategies, showing how each placement shapes activation variance and gradient propagation. To validate our theoretical insights, we conduct extensive experiments on Transformers with up to $3.2$B parameters, showing that Peri-LN consistently achieves more balanced variance growth, steadier gradient flow, and more stable convergence. Our results suggest that Peri-LN warrants broader consideration for large-scale Transformer architectures, providing renewed insights into the optimal placement of LN.
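To make the placement concrete, here is a minimal PyTorch-style sketch of one attention sublayer under the Peri-LN strategy described in the abstract, contrasted in comments with Pre-LN and Post-LN. The module names and the use of `nn.LayerNorm` / `nn.MultiheadAttention` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class PeriLNBlock(nn.Module):
    """One attention sublayer with Peri-LN placement (illustrative sketch).

    Pre-LN:   y = x + attn(norm(x))
    Post-LN:  y = norm(x + attn(x))
    Peri-LN:  y = x + norm_out(attn(norm_in(x)))   # normalize input *and* output
    """

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.input_norm = nn.LayerNorm(d_model)   # normalization before the sublayer
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.output_norm = nn.LayerNorm(d_model)  # peripheral normalization after the sublayer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.input_norm(x)          # bound the sublayer's input scale
        h, _ = self.attn(h, h, h)       # self-attention sublayer
        return x + self.output_norm(h)  # bound the sublayer's output before the residual add
```

The same wrapping would apply to the feed-forward sublayer; the key point is that each sublayer's output is normalized before being added back to the residual stream, which is what the paper argues keeps activation variance balanced.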
Lay Summary: Training today’s large language models is a bit like building a very tall tower of blocks: unless each layer is carefully aligned, the whole structure can wobble or even collapse. One of the “alignment tools” engineers use is **layer normalization**, which keeps the numbers inside the model from drifting too high or too low. Most builders put this tool either **before** or **after** each layer, but both choices have hidden drawbacks—one can weaken the learning signal, while the other can let problematically large numbers sneak through. Our study shines a spotlight on a quieter third option, in which each layer is wrapped **both before and after** with normalization—an arrangement we call **Peri-LN** (“peri” meaning “around”). By rigorously comparing all three setups across models with up to 3 billion parameters, we show that Peri-LN keeps calculations balanced and prevents training crashes. This simple change could make future language models more reliable, cheaper to train, and accessible to more research groups—helping the field progress without wasting massive computing resources.
Primary Area: Deep Learning->Foundation Models
Keywords: layer normalization, transformers, architecture, pre-training
Submission Number: 10039