Exponential Moving Average of Weights in Deep Learning: Dynamics and Benefits

Published: 12 Apr 2024, Last Modified: 12 Apr 2024. Accepted by TMLR.
Abstract: Weight averaging of Stochastic Gradient Descent (SGD) iterates is a popular method for training deep learning models. While it is often used as part of complex training pipelines to improve generalization or serve as a "teacher" model, weight averaging lacks proper evaluation on its own. In this work, we present a systematic study of the Exponential Moving Average (EMA) of weights. We first explore the training dynamics of EMA, give guidelines for hyperparameter tuning, and highlight its good early performance, partly explaining its success as a teacher. We also observe that EMA requires less learning rate decay compared to SGD since averaging naturally reduces noise, introducing a form of implicit regularization. Through extensive experiments, we show that EMA solutions differ from last-iterate solutions. EMA models not only generalize better but also exhibit improved i) robustness to noisy labels, ii) prediction consistency, iii) calibration and iv) transfer learning. Therefore, we suggest that an EMA of weights is a simple yet effective plug-in to improve the performance of deep learning models.
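The abstract studies an EMA of the SGD weight iterates maintained alongside the trained model. As a point of reference, below is a minimal sketch of how such an EMA copy is commonly kept in PyTorch; the decay value 0.999 and the update-after-every-step schedule are illustrative assumptions, not the paper's specific settings.

```python
import copy
import torch

def update_ema(ema_model, model, decay=0.999):
    """In-place EMA update of parameters: ema <- decay * ema + (1 - decay) * current."""
    with torch.no_grad():
        for ema_p, p in zip(ema_model.parameters(), model.parameters()):
            ema_p.mul_(decay).add_(p, alpha=1 - decay)
        # Buffers (e.g. BatchNorm running statistics) are usually copied directly
        # rather than averaged.
        for ema_b, b in zip(ema_model.buffers(), model.buffers()):
            ema_b.copy_(b)

# Typical usage (sketch): create the EMA copy once, then update it after each
# optimizer step; evaluate with ema_model instead of (or in addition to) model.
#
# model = ...                      # any torch.nn.Module trained with SGD
# ema_model = copy.deepcopy(model)
# for batch in loader:
#     loss = compute_loss(model, batch)
#     loss.backward()
#     optimizer.step()
#     optimizer.zero_grad()
#     update_ema(ema_model, model, decay=0.999)
```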
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Minor corrections for camera-ready version.
Assigned Action Editor: ~Atsushi_Nitanda1
Submission Number: 1819