Keywords: Optimization, Fine-tuning
TL;DR: We show that gradient heterogeneity explains why Adam outperforms SGD when training Transformers.
Abstract: Transformers are challenging to optimize with SGD and typically require adaptive optimizers such as Adam. However, the reasons behind the superior performance of Adam over SGD remain unclear. In this study, we investigate the optimization of Transformers by focusing on gradient heterogeneity, defined as the disparity in gradient norms among parameters. Our analysis shows that gradient heterogeneity hinders gradient-based optimization, including SGD, while sign-based optimization, a simplified variant of Adam, is less affected. We further examine gradient heterogeneity in Transformers and show that it is influenced by the placement of layer normalization. Experimental results from fine-tuning Transformers in both NLP and vision domains validate our theoretical analyses.
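To make the abstract's contrast concrete, here is a minimal sketch of the two update rules it compares. The heterogeneity measure below (the ratio of the largest to the smallest per-group L2 gradient norm) and the toy parameter groups are illustrative assumptions, not the paper's exact definitions; the point is that a sign-based step ignores gradient magnitude, so groups with wildly different gradient norms still take equal-sized steps.

```python
import math

def gradient_heterogeneity(grads):
    # Illustrative proxy (assumption, not the paper's formal definition):
    # ratio of the largest to the smallest per-group L2 gradient norm.
    norms = [math.sqrt(sum(x * x for x in g)) for g in grads]
    return max(norms) / min(norms)

def sgd_step(params, grads, lr=0.1):
    # Gradient-based update: step size scales with raw gradient magnitude,
    # so heterogeneous norms produce heterogeneous step sizes.
    return [[p - lr * g for p, g in zip(ps, gs)]
            for ps, gs in zip(params, grads)]

def sign_step(params, grads, lr=0.1):
    # Sign-based update (a simplified variant of Adam): each coordinate
    # moves by exactly lr, regardless of gradient magnitude.
    return [[p - lr * (1 if g > 0 else -1 if g < 0 else 0)
             for p, g in zip(ps, gs)]
            for ps, gs in zip(params, grads)]

params = [[1.0, 1.0], [1.0, 1.0]]       # two hypothetical parameter groups
grads = [[100.0, 100.0], [0.01, 0.01]]  # very different gradient scales

print(gradient_heterogeneity(grads))  # roughly 10000
print(sgd_step(params, grads))        # first group overshoots badly
print(sign_step(params, grads))       # both groups take equal-sized steps
```

With a single learning rate, SGD either blows up the large-gradient group or barely moves the small-gradient one; the sign-based step sidesteps that trade-off, which is the intuition the paper's analysis formalizes.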
Primary Area: transfer learning, meta learning, and lifelong learning
Supplementary Material: zip
Submission Number: 14925