SGDR: Stochastic Gradient Descent with Warm Restarts

Published: 06 Feb 2017, Last Modified: 22 Oct 2023, ICLR 2017 Poster
Abstract: Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR
TL;DR: We propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance.
Conflicts: uni-freiburg.de
Keywords: Deep learning, Optimization
Community Implementations: [17 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/arxiv:1608.03983/code)
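
For readers skimming this page, below is a minimal sketch of the cosine-annealing-with-warm-restarts learning-rate schedule proposed in the paper. The function name and the default hyperparameter values are illustrative only and are not taken from the authors' released code; the formula follows the paper's schedule, where the learning rate is reset to its maximum at each restart and annealed with a cosine down to its minimum over a run whose length grows by a constant factor after every restart.

```python
import math

def sgdr_learning_rate(epoch, eta_min=0.0, eta_max=0.05, T_0=10, T_mult=2):
    """Cosine-annealed learning rate with warm restarts (SGDR-style schedule).

    epoch   -- current (possibly fractional) epoch index
    eta_min -- lower bound of the learning rate
    eta_max -- value the learning rate is reset to at each restart
    T_0     -- length (in epochs) of the first run between restarts
    T_mult  -- factor by which the run length grows after each restart
    """
    # Locate the current run and the epochs elapsed since the last restart.
    T_i, T_cur = T_0, epoch
    while T_cur >= T_i:
        T_cur -= T_i
        T_i *= T_mult
    # Cosine annealing from eta_max down to eta_min within the current run.
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * T_cur / T_i))

# Example: print the schedule for the first 30 epochs (restarts at epochs 10 and 30).
if __name__ == "__main__":
    for e in range(30):
        print(e, round(sgdr_learning_rate(e), 4))
```

For practical use, PyTorch ships this schedule as torch.optim.lr_scheduler.CosineAnnealingWarmRestarts, which takes the same T_0, T_mult, and eta_min parameters.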