A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes

21 May 2021 (modified: 25 Nov 2024) | NeurIPS 2021 Submission | Readers: Everyone
Keywords: neural networks, deep learning, neural network optimization, hyperparameter tuning, optimizer comparison
TL;DR: We retune the Nesterov/Adam optimizers on pipelines where LARS/LAMB are commonly used and achieve similar or better performance, providing competitive baselines for the large batch training setting.
Abstract: Recently the LARS and LAMB optimizers have been proposed for training neural networks faster using large batch sizes. LARS and LAMB add layer-wise normalization to the update rules of Heavy-ball momentum and Adam, respectively, and have become popular in prominent benchmarks and deep learning libraries. However, without fair comparisons to standard optimizers, it remains an open question whether LARS and LAMB have any benefit over traditional, generic algorithms. In this work we demonstrate that standard optimization algorithms such as Nesterov momentum and Adam can match or exceed the results of LARS and LAMB at large batch sizes. Our results establish new, stronger baselines for future comparisons at these batch sizes and shed light on the difficulties of comparing optimizers for neural network training more generally.
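For readers unfamiliar with the layer-wise normalization the abstract refers to, the sketch below contrasts a plain Heavy-ball momentum step with a LARS-style step that rescales each layer's update by a trust ratio of the form ||w|| / ||g + λw||. This is a minimal illustration based on the commonly cited LARS formulation, not code from this paper; the function names, hyperparameter values, and the exact placement of the trust ratio are assumptions for exposition only.

```python
# Illustrative sketch (not the paper's code): Heavy-ball momentum vs. a
# LARS-style layer-wise normalized update. The trust-ratio form follows the
# commonly cited LARS formulation; all names and defaults here are assumed.
import numpy as np

def heavy_ball_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """Standard Heavy-ball momentum update for one layer's weights."""
    velocity = momentum * velocity + grad
    return w - lr * velocity, velocity

def lars_style_step(w, grad, velocity, lr=0.1, momentum=0.9,
                    weight_decay=1e-4, eps=1e-8):
    """Heavy-ball update rescaled per layer by a trust ratio ||w|| / ||g + wd*w||."""
    update = grad + weight_decay * w
    trust_ratio = np.linalg.norm(w) / (np.linalg.norm(update) + eps)
    velocity = momentum * velocity + trust_ratio * update
    return w - lr * velocity, velocity

# Toy usage on a single parameter "layer".
w = np.ones(4)
v = np.zeros(4)
g = np.full(4, 0.5)
w_hb, _ = heavy_ball_step(w, g, v)
w_lars, _ = lars_style_step(w, g, v)
```

LAMB applies the same layer-wise rescaling idea to Adam's update rather than to Heavy-ball momentum; the paper's claim is that, with careful retuning, the unnormalized base optimizers remain competitive at large batch sizes.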
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: pdf
Community Implementations: [8 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/a-large-batch-optimizer-reality-check/code)