Exploring loss function topology with cyclical learning rates

Leslie N. Smith, Nicholay Topin

Feb 14, 2017 (modified: Feb 16, 2017) · ICLR 2017 workshop submission
  • Abstract: We present observations and discussion of previously unreported phenomena discovered while training residual networks. The goal of this work is to better understand the nature of neural networks through the examination of these new empirical results. These behaviors were identified through the application of Cyclical Learning Rates (CLR) and linear network interpolation. Among these behaviors are counterintuitive increases and decreases in training loss and instances of rapid training. For example, we demonstrate how CLR can produce greater testing accuracy than traditional training despite using large learning rates. (Illustrative sketches of both techniques follow after this list.)
  • TL;DR: We present empirical observations and discussion of unusual phenomena discovered while training residual networks.
  • Keywords: Deep learning
  • Conflicts: nrl.navy.mil, umbc.edu
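
The two tools named in the abstract, the triangular CLR schedule and linear network interpolation, are concrete enough to sketch. Below is a minimal illustration assuming NumPy and flattened weight vectors; the function names, the hyperparameter defaults (step_size=2000, base_lr=0.001, max_lr=0.006), and the loss_fn callable are illustrative placeholders, not values taken from this submission. The triangular formula follows Smith (2015); straight-line interpolation between two trained networks follows Goodfellow et al. (2015).

```python
import numpy as np

def triangular_clr(it, step_size=2000, base_lr=0.001, max_lr=0.006):
    """Triangular CLR policy: the learning rate climbs linearly from
    base_lr to max_lr over step_size iterations, then descends back,
    repeating every 2 * step_size iterations (Smith, 2015)."""
    cycle = np.floor(1 + it / (2 * step_size))
    x = np.abs(it / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

def interpolation_losses(theta_a, theta_b, loss_fn, num_points=25):
    """Loss along the line theta(alpha) = (1 - alpha) * theta_a + alpha * theta_b
    between two flattened weight vectors (Goodfellow et al., 2015).
    loss_fn is a hypothetical callable mapping a weight vector to a
    scalar training loss."""
    alphas = np.linspace(0.0, 1.0, num_points)
    return [(a, loss_fn((1 - a) * theta_a + a * theta_b)) for a in alphas]
```

Plotting triangular_clr against the iteration count gives the sawtooth schedule that CLR cycles through, while interpolation_losses traces a one-dimensional slice of the loss surface of the kind the authors examine between network snapshots.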
