Revisiting Natural Gradient for Deep Networks
Razvan Pascanu, Yoshua Bengio
Dec 21, 2013 (modified: Dec 21, 2013) · ICLR 2014 conference submission · Readers: everyone
Decision: submitted, no decision
Abstract: The aim of this paper is three-fold. First, we show that Hessian-Free optimization (Martens, 2010) and Krylov Subspace Descent (Vinyals and Povey, 2012) can be described as implementations of natural gradient descent due to their use of the extended Gauss-Newton approximation of the Hessian. Second, we re-derive natural gradient from basic principles, contrasting two versions of the algorithm found in the neural network literature and highlighting a few differences between natural gradient and typical second-order methods. Lastly, we show empirically that natural gradient can be robust to overfitting and, in particular, to the order in which the training data is presented to the model.
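The core update the abstract refers to preconditions the gradient by the inverse Fisher information matrix, θ ← θ − η F⁻¹∇L. Below is a minimal NumPy sketch of that idea for logistic regression, using the empirical Fisher (the average outer product of per-example gradients) with a damping term for numerical stability; the function and parameter names (`natural_gradient_step`, `damping`) are illustrative, not from the paper, and this toy setup is not the paper's experimental protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))               # toy inputs
w_true = np.array([1.0, -2.0, 0.5])
y = (X @ w_true + 0.1 * rng.normal(size=100) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def natural_gradient_step(w, X, y, lr=0.5, damping=1e-3):
    """One damped natural gradient step for logistic regression (illustrative)."""
    p = sigmoid(X @ w)
    per_example_grads = (p - y)[:, None] * X            # grad of per-example NLL
    g = per_example_grads.mean(axis=0)                  # average gradient
    F = per_example_grads.T @ per_example_grads / len(y)  # empirical Fisher
    # Solve (F + damping*I) delta = g instead of forming F^{-1} explicitly.
    delta = np.linalg.solve(F + damping * np.eye(len(w)), g)
    return w - lr * delta

w = np.zeros(3)
for _ in range(50):
    w = natural_gradient_step(w, X, y)
```

The damping term plays the same practical role as in Hessian-Free methods: it keeps the (possibly ill-conditioned) Fisher matrix invertible and limits the step size in low-curvature directions.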