MERIT: Maximum-normalized Element-wise Ratio for Language Model Large-batch Training

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Large-batch training has become a cornerstone in accelerating the training of deep neural networks, yet it poses challenges in optimization and generalization. Existing optimizers such as AdamW exhibit performance degradation during large-batch training of language models, due to the information bottleneck in attention layers caused by a sharp increase in the max attention logit. While the LAMB optimizer partially addresses this issue, some attention layers still suffer from it. The reason is that the $l_2$-norm-based trust ratios in LAMB are less effective in directly influencing the max value of query/key weights. Furthermore, the weight-wise trust ratio in LAMB is error-prone, as it overlooks relationships among weight values within rows or columns. Building on these observations, we propose a novel optimizer, MERIT, which leverages the max-norm to calculate the trust ratio and thereby constrain the max attention logit more effectively. Moreover, we construct element-wise trust ratios that provide more robust update scaling by focusing on local weight structures. Extensive large-batch training experiments across various sizes of GPT-2 models demonstrate the superior performance of MERIT. Notably, when training GPT-2 Medium, MERIT enables a 6k batch size without any performance degradation compared to the standard batch size (480) with 48B training tokens. This work highlights the importance of considering the max attention logit and finer-granularity trust ratios in large-batch training. It improves training stability and paves the way for larger batch sizes, enabling faster development and iteration of large language models. Code is available at https://github.com/NUS-HPC-AI-Lab/MERIT.
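For intuition, here is a minimal, hypothetical PyTorch sketch of a max-norm, row-wise trust-ratio scaling of the kind the abstract describes. The function name `merit_like_scale`, the row-wise granularity, and the clamping rule are illustrative assumptions rather than the paper's exact algorithm; see the linked repository for the authors' implementation.

```python
import torch

def merit_like_scale(weight: torch.Tensor, update: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Scale an update by max-norm trust ratios (illustrative sketch only)."""
    if weight.dim() < 2:
        # 1-D parameters (e.g. biases): fall back to a single tensor-wise ratio,
        # analogous to LAMB's layer-wise trust ratio but built from max-norms
        # instead of l2 norms.
        w_max = weight.abs().max()
        u_max = update.abs().max().clamp_min(eps)
        return update * (w_max / u_max).clamp(max=1.0)
    # 2-D weights: compute the trust ratio per row, so the largest entry of each
    # row of the update is bounded relative to the corresponding row of the
    # weight (a finer granularity than a whole-tensor ratio).
    w_max = weight.abs().amax(dim=1, keepdim=True)             # shape (rows, 1)
    u_max = update.abs().amax(dim=1, keepdim=True).clamp_min(eps)
    ratio = (w_max / u_max).clamp(max=1.0)                     # assumed cap; the paper's rule may differ
    return update * ratio

# Hypothetical usage inside a training step, given an AdamW-style update direction:
#   w.data.add_(merit_like_scale(w.data, adamw_update), alpha=-lr)
```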
Lay Summary: Training large AI language models efficiently is challenging because using bigger batches of data often leads to unstable or lower-quality results. Current training methods (like AdamW or LAMB) struggle with this because they can’t properly control sharp spikes in attention logits—critical parts of how these models process information. While LAMB partly fixes this, it still misses key details, such as how to limit extreme values in certain weights or how to account for relationships between neighboring row/column values in the model’s parameters.

To solve this, we developed **MERIT**, a new training method that:
- **Controls extreme values** by using a "max-aware" approach to adjust updates, preventing attention values from spiking.
- **Focuses on local attention patterns** in the model’s weights to make updates more precise and stable.

In tests with GPT-2 models, MERIT allowed training with larger batch sizes than AdamW and LAMB without sacrificing performance. This means models can be trained faster, accelerating progress in AI development.

**Why it matters:** By addressing overlooked details in how training updates are scaled, MERIT improves stability and opens the door to training larger AI models more efficiently—a critical step for advancing technologies like large language models.
Link To Code: https://github.com/NUS-HPC-AI-Lab/MERIT
Primary Area: Optimization
Keywords: large-batch training, max attention logit, language models
Submission Number: 4347