An Empirical Analysis of Deep Network Loss Surfaces

27 Sep 2020 (modified: 08 Dec 2016), ICLR 2017 conference submission
  • TL;DR: Analyzing the loss surfaces of deep neural networks trained with different optimization methods
  • Abstract: The training of deep neural networks is a high-dimensional optimization problem with respect to the loss function of a model. Unfortunately, these loss functions are high-dimensional and non-convex, and hence difficult to characterize. In this paper, we empirically investigate the geometry of the loss functions for state-of-the-art networks with multiple stochastic optimization methods. We do this through several experiments that are visualized on polygons to understand how and when these stochastic optimization methods find minima. (A minimal interpolation sketch follows this list.)
  • Keywords: Deep learning
  • Conflicts: toronto.edu, uoguelph.ca, umontreal.ca, janelia.hhmi.org
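The abstract describes visualizing how different stochastic optimizers arrive at minima of the loss surface. Below is a minimal sketch of the kind of technique such analyses commonly rely on: evaluating the loss along a straight line between two sets of trained parameters. The toy quadratic loss and the stand-in parameter vectors here are hypothetical placeholders for illustration, not the paper's networks, data, or exact polygon construction.

```python
# Sketch: evaluate a loss function along the linear interpolation between two
# parameter vectors (e.g. solutions found by two different optimizers).
# Assumes a toy quadratic loss as a placeholder for a real network's loss.
import numpy as np


def interpolate_loss(loss_fn, theta_a, theta_b, num_points=25):
    """Return (alphas, losses) for theta(alpha) = (1 - alpha) * theta_a + alpha * theta_b."""
    # Extend slightly past both endpoints to see the surface around each solution.
    alphas = np.linspace(-0.25, 1.25, num_points)
    losses = np.array(
        [loss_fn((1.0 - a) * theta_a + a * theta_b) for a in alphas]
    )
    return alphas, losses


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical stand-ins for parameters found by two different optimizers.
    theta_sgd = rng.normal(size=100)
    theta_adam = rng.normal(size=100)
    toy_loss = lambda theta: float(np.sum(theta ** 2))  # placeholder convex loss

    alphas, losses = interpolate_loss(toy_loss, theta_sgd, theta_adam)
    for a, l in zip(alphas, losses):
        print(f"alpha={a:+.2f}  loss={l:.3f}")
```

Plotting the resulting loss values against alpha gives a one-dimensional slice of the surface; repeating this for several pairs (or triples) of solutions yields the polygon-style visualizations the abstract refers to.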