Entropy-SGD: Biasing Gradient Descent Into Wide Valleys

Pratik Chaudhari, Anna Choromanska, Stefano Soatto, Yann LeCun, Carlo Baldassi, Christian Borgs, Jennifer Chayes, Levent Sagun, Riccardo Zecchina

Nov 04, 2016 (modified: Mar 04, 2017) ICLR 2017 conference submission
  • Abstract: This paper proposes a new optimization algorithm, Entropy-SGD, for training deep neural networks, motivated by the local geometry of the energy landscape. Local extrema with low generalization error have a large proportion of almost-zero eigenvalues in the Hessian, with very few positive or negative eigenvalues. We leverage this observation to construct a local-entropy-based objective function that favors well-generalizable solutions lying in large flat regions of the energy landscape, while avoiding poorly-generalizable solutions located in sharp valleys. Conceptually, our algorithm resembles two nested loops of SGD, where we use Langevin dynamics in the inner loop to compute the gradient of the local entropy before each update of the weights (see the sketch after this list). We show that the new objective has a smoother energy landscape and, using uniform stability, that it improves generalization over SGD under certain assumptions. Our experiments on convolutional and recurrent neural networks demonstrate that Entropy-SGD compares favorably to state-of-the-art techniques in terms of both generalization error and training time.
  • TL;DR: This paper focuses on developing new optimization tools for deep learning that are tailored to exploit the local geometric properties of the objective function.
  • Keywords: Deep learning, Optimization
  • Conflicts: ucla.edu, cs.ucla.edu, nyu.edu, cims.nyu.edu, microsoft.com, gwu.edu, columbia.edu, facebook.com, polito.it
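
The abstract describes the algorithm as two nested loops: an inner Langevin-dynamics (SGLD) loop that estimates the gradient of the local entropy, and an outer loop that updates the weights with that gradient. Below is a minimal sketch of that structure, assuming a loss gradient `grad_f` and illustrative hyperparameters (`gamma`, `eta`, `eta_prime`, `eps`, `alpha`, `L`) whose values are not given in this abstract; it is a toy reconstruction, not the authors' implementation.

```python
import numpy as np

def entropy_sgd_step(x, grad_f, gamma=0.03, eta=0.1, eta_prime=0.01,
                     eps=1e-4, alpha=0.75, L=20, rng=None):
    """One outer update: run SGLD around the current weights x to estimate
    the mean mu of the local Gibbs measure, then descend the local-entropy
    gradient, which is proportional to gamma * (x - mu)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x_prime = x.copy()
    mu = x.copy()  # running estimate of the Gibbs mean <x'>
    for _ in range(L):  # inner loop: Langevin dynamics tethered to x
        # SGLD step on f(x') + (gamma/2) * ||x - x'||^2, plus Gaussian noise
        x_prime = x_prime - eta_prime * (grad_f(x_prime)
                                         + gamma * (x_prime - x))
        x_prime = x_prime + np.sqrt(eta_prime) * eps * rng.standard_normal(x.shape)
        mu = (1 - alpha) * mu + alpha * x_prime  # exponential averaging
    return x - eta * gamma * (x - mu)  # outer gradient-descent step

# Usage on a toy quadratic loss f(x) = 0.5 * ||x||^2, so grad_f(x) = x
x = np.ones(5)
for _ in range(100):
    x = entropy_sgd_step(x, grad_f=lambda w: w)
```

The tethering term `gamma * (x_prime - x)` keeps the inner chain in a neighborhood of the current weights, so `mu` averages the loss surface locally; moving `x` toward `mu` is what biases the iterates into wide, flat valleys.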