How Does Learning Rate Decay Help Modern Neural Networks?

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
TL;DR: We provide another novel explanation of learning rate decay: an initially large learning rate prevents the network from memorizing noisy data, while decaying the learning rate improves the learning of complex patterns.
Abstract: Learning rate decay (lrDecay) is a \emph{de facto} technique for training modern neural networks. It starts with a large learning rate and then decays it multiple times. It is empirically observed to help both optimization and generalization. Common beliefs about how lrDecay works come from the optimization analysis of (Stochastic) Gradient Descent: 1) an initially large learning rate accelerates training or helps the network escape spurious local minima; 2) decaying the learning rate helps the network converge to a local minimum and avoid oscillation. Despite the popularity of these common beliefs, experiments suggest that they are insufficient to explain the general effectiveness of lrDecay in training modern neural networks that are deep, wide, and nonconvex. We provide another novel explanation: an initially large learning rate prevents the network from memorizing noisy data, while decaying the learning rate improves the learning of complex patterns. The proposed explanation is validated on a carefully constructed dataset with tractable pattern complexity. Its implication, that additional patterns learned in later stages of lrDecay are more complex and thus less transferable, is further verified on real-world datasets. We believe this alternative explanation will shed light on the design of better training strategies for modern neural networks.
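
For readers unfamiliar with the "start large, decay multiple times" schedule the abstract refers to, the following is a minimal sketch of step-wise learning rate decay using PyTorch's `MultiStepLR` scheduler. The placeholder model, initial learning rate, milestones, and decay factor are illustrative assumptions for a CIFAR-style setup, not values taken from the paper.

```python
# Illustrative sketch of step-wise learning rate decay (lrDecay).
# Model, lr=0.1, milestones=[30, 60], and gamma=0.1 are assumed, not from the paper.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # initially large learning rate

# Decay the learning rate by 10x at epochs 30 and 60.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[30, 60], gamma=0.1)

for epoch in range(90):
    # ... one training epoch over the data would go here ...
    scheduler.step()  # lr: 0.1 -> 0.01 (after epoch 30) -> 0.001 (after epoch 60)
```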
Keywords: Learning rate decay, Optimization, Explainability, Deep learning, Transfer learning
Data: [CIFAR-10](https://paperswithcode.com/dataset/cifar-10), [CUB-200-2011](https://paperswithcode.com/dataset/cub-200-2011), [Caltech-256](https://paperswithcode.com/dataset/caltech-256), [Sketch](https://paperswithcode.com/dataset/sketch)