Eigenvalues of the Hessian in Deep Learning: Singularity and Beyond
Levent Sagun, Leon Bottou, Yann LeCun
Nov 04, 2016 (modified: Jan 20, 2017) · ICLR 2017 conference submission
Abstract: We examine the eigenvalues of the Hessian of a loss function before and after training. The eigenvalue distribution is composed of two parts: a bulk concentrated around zero, and edges scattered away from zero. We present empirical evidence that the bulk indicates how over-parametrized the system is, while the edges depend on the input data.
TL;DR: The eigenvalues of the Hessian of loss functions in deep learning have two components: a singular bulk at zero that depends on the over-parametrization, and a discrete part that depends on the data.
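The bulk-at-zero phenomenon can be illustrated on a toy model (an assumption for illustration, not the networks studied in the paper): for a linear least-squares loss, the Hessian is X^T X / n, whose rank is at most the number of samples, so an over-parametrized model necessarily carries a large bulk of exactly-zero eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_params = 20, 100  # over-parametrized: more weights than data points
X = rng.standard_normal((n_samples, n_params))
y = rng.standard_normal(n_samples)

# For the quadratic loss L(w) = ||Xw - y||^2 / (2 * n_samples),
# the Hessian is constant in w: H = X^T X / n_samples.
H = X.T @ X / n_samples
eigs = np.linalg.eigvalsh(H)  # sorted ascending

# The rank of H is at most n_samples, so at least
# n_params - n_samples eigenvalues are (numerically) zero: the "bulk".
n_zero = int(np.sum(np.abs(eigs) < 1e-8))
print(f"zero eigenvalues (bulk): {n_zero}")
print(f"largest eigenvalue (edge): {eigs[-1]:.3f}")
```

With 20 samples and 100 parameters, the bulk contains 80 zero eigenvalues; only the remaining 20 edge eigenvalues reflect the data. The paper's deep networks have non-constant Hessians, but the same rank-deficiency intuition applies at a critical point.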
Keywords: Optimization, Deep learning