Keywords: Transformer, Training Dynamics, Model Crash
TL;DR: We revisit the training dynamics of Transformers to tame the training process without using learning rate warmup.
Abstract: Scaling Transformers to a large scale without technical tricks such as learning rate warmup and a significantly lower learning rate is an extremely challenging task and is attracting increasing attention. In this paper, we provide a theoretical analysis of the Transformer training process and reveal a key problem behind the model crash phenomenon, termed *spectral energy concentration* of ${W_q}^{\top} W_k$, which is the cause of a malignant entropy collapse, where ${W_q}$ and $W_k$ are the projection matrices for the query and the key in Transformer, respectively.
To remedy this problem, motivated by *Weyl's Inequality*, we present a novel optimization strategy, i.e., keeping the weight updates in successive steps steady: if the ratio $\frac{\sigma_{1}(\nabla W_t)}{\sigma_{1}(W_{t-1})}$ exceeds a threshold, we automatically bound the learning rate to a weighted multiple of $\frac{\sigma_{1}(W_{t-1})}{\sigma_{1}(\nabla W_t)}$, where $\nabla W_t$ is the update quantity at step $t$. Such an optimization strategy prevents the spectral energy from concentrating in only a few directions, and thus avoids the malignant entropy collapse that triggers the model crash. We conduct extensive experiments on ViT, Swin-Transformer, and GPT, showing that our optimization strategy can effectively and stably train these Transformer models without learning rate warmup.
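The following is a minimal sketch (not the authors' released code) of the learning-rate bounding rule described in the abstract, assuming a simple SGD-style per-matrix update; the names `threshold` and `gamma` (the weighting factor) are illustrative assumptions, not parameters taken from the paper.

```python
import torch

def bounded_lr(W_prev: torch.Tensor, grad: torch.Tensor,
               base_lr: float, threshold: float = 1.0, gamma: float = 1.0) -> float:
    """Bound the step size when the update would dominate the current weights.

    sigma_1(.) denotes the largest singular value (spectral norm).
    If sigma_1(grad) / sigma_1(W_prev) exceeds `threshold`, the learning rate is
    capped at gamma * sigma_1(W_prev) / sigma_1(grad), keeping successive weight
    updates steady in the sense of Weyl's inequality.
    """
    sigma_w = torch.linalg.matrix_norm(W_prev, ord=2)   # sigma_1(W_{t-1})
    sigma_g = torch.linalg.matrix_norm(grad, ord=2)     # sigma_1(nabla W_t)
    ratio = (sigma_g / (sigma_w + 1e-12)).item()
    if ratio > threshold:
        # gamma * sigma_1(W_{t-1}) / sigma_1(nabla W_t)
        return min(base_lr, gamma / ratio)
    return base_lr

# Hypothetical usage inside a training loop, applied per weight matrix:
# for W in [model.W_q, model.W_k]:
#     lr = bounded_lr(W.data, W.grad, base_lr=1e-3)
#     W.data -= lr * W.grad
```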
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9501