An Empirical Analysis of Deep Network Loss Surfaces

Submitted to ICLR 2017
Abstract: The training of deep neural networks is a high-dimensional, non-convex optimization problem over the model's loss function, which makes these loss surfaces difficult to characterize. In this paper, we empirically investigate the geometry of the loss functions of state-of-the-art networks trained with multiple stochastic optimization methods. We do this through several experiments whose results are visualized on low-dimensional polygonal slices of parameter space, in order to understand how and when these stochastic optimization methods find minima.
TL;DR: Analyzing the loss surfaces of deep neural networks trained with different optimization methods
Keywords: Deep learning
Conflicts: toronto.edu, uoguelph.ca, umontreal.ca, janelia.hhmi.org
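
The visualizations described in the abstract rest on evaluating the loss along low-dimensional slices of parameter space whose vertices are solutions found by different optimizers. Below is a minimal sketch of the simplest such slice, a one-dimensional linear interpolation between two trained parameter sets. It assumes PyTorch and hypothetical inputs (model, theta_a, theta_b, loss_fn, data_loader); it is an illustration of the general technique, not the authors' implementation.

    import torch

    def interpolate_loss(model, theta_a, theta_b, loss_fn, data_loader, steps=25):
        # Evaluate the mean loss along the line segment between two parameter
        # settings theta_a and theta_b (state_dicts of the same architecture,
        # e.g. the final parameters found by two different optimizers).
        alphas = torch.linspace(0.0, 1.0, steps)
        losses = []
        for alpha in alphas:
            # Convex combination (1 - alpha) * theta_a + alpha * theta_b of the
            # floating-point tensors; integer buffers (e.g. BatchNorm counters)
            # are carried over from theta_a unchanged.
            blended = {
                k: ((1 - alpha) * v + alpha * theta_b[k]) if v.is_floating_point() else v
                for k, v in theta_a.items()
            }
            model.load_state_dict(blended)
            model.eval()
            total, count = 0.0, 0
            with torch.no_grad():
                for inputs, targets in data_loader:
                    total += loss_fn(model(inputs), targets).item() * inputs.size(0)
                    count += inputs.size(0)
            losses.append(total / count)
        return alphas.tolist(), losses

Plotting the returned losses against the interpolation coefficients gives a one-dimensional cross-section of the loss surface between the two solutions; the polygonal visualizations generalize this idea to combinations of more than two solutions.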