Connections between Schedule-Free SGD, Accelerated SGD Variants, and Weight Averaging

Published: 10 Oct 2024 · Last Modified: 07 Dec 2024 · NeurIPS 2024 Workshop · CC BY 4.0
Keywords: Schedule-Free SGD, Schedule-Free AdamW, Accelerated SGD, Weight Averaging, Accelerated Methods
TL;DR: We provide precise connections between the recently proposed schedule-free SGD optimizer, accelerated SGD variants, and weight averaging.
Abstract: In this work, we uncover precise connections between recently proposed optimizers, such as schedule-free SGD and Lion, and the literature on accelerated SGD variants. We show that schedule-free SGD can be understood precisely as accelerated SGD combined with weight averaging. The central idea behind all of these optimizers is decoupling the momentum coefficient from the weight placed on the gradient at the current step. We support these claims with experiments on a 150M-parameter decoder-only language model, demonstrating that ScheduleFreeAdamW is close in performance to Adam combined with accelerated SGD and weight averaging.
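To make the decoupling concrete, below is a minimal NumPy sketch of the schedule-free SGD recursion as stated in the original schedule-free paper (Defazio et al., 2024), which this abstract builds on: gradients are evaluated at an interpolation `y` between the raw SGD iterate `z` and a running average `x`, so the interpolation coefficient `beta` (playing the role of a momentum coefficient) is decoupled from the learning rate applied to the current gradient. This is an illustrative sketch, not the paper's code; the names `grad_fn`, `lr`, `beta`, and `steps` are assumptions chosen for the example.

```python
import numpy as np

def schedule_free_sgd(grad_fn, x0, lr=0.1, beta=0.9, steps=100):
    """Sketch of the schedule-free SGD recursion (Defazio et al., 2024).

    z: raw SGD iterate; x: uniform running average of the z iterates
    (the point returned at the end); y: interpolation between z and x
    at which gradients are evaluated. beta controls the interpolation
    and is decoupled from the weight lr on the current gradient.
    """
    z = x0.copy()  # SGD sequence
    x = x0.copy()  # averaged sequence, returned at the end
    for t in range(1, steps + 1):
        y = (1 - beta) * z + beta * x  # gradient taken at y, not z
        z = z - lr * grad_fn(y)        # plain SGD step on z
        c = 1.0 / t                    # uniform-average weight
        x = (1 - c) * x + c * z        # online average of z iterates
    return x

# Example: minimize f(y) = y^2, whose gradient is 2y.
x_star = schedule_free_sgd(lambda y: 2 * y, np.array([5.0]))
print(x_star)  # approaches 0
```

Setting `beta = 0` recovers averaged SGD (gradients at `z`), while `beta > 0` shifts the evaluation point toward the average, which is the momentum-like mechanism the abstract relates to accelerated SGD variants.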
Submission Number: 122