On Fast Dropout and its Applicability to Recurrent Networks
Justin Bayer, Christian Osendorfer, Sebastian Urban, Nutan Chen, Daniela Korhammer, Patrick van der Smagt
Dec 18, 2013 (modified: Dec 18, 2013) · ICLR 2014 conference submission · Readers: everyone
Decision: submitted, no decision
Abstract: Recurrent Neural Networks (RNNs) are rich models for the processing of sequential data. Recent work on advancing the state of the art has focused on the optimization or modelling of RNNs, mostly motivated by addressing the problems of vanishing and exploding gradients.
The control of overfitting has received considerably less attention.
This paper contributes to that line of work by analyzing fast dropout, a recent regularization method for generalized linear models and neural networks, from a back-propagation inspired perspective. We show that fast dropout implements a quadratic form of an adaptive, per-parameter regularizer, which rewards large weights in the light of underfitting, penalizes them for overconfident predictions and vanishes at minima of an unregularized training loss. One consequence of this is the absence of a global weight attractor, which is particularly appealing for RNNs, since the dynamics are not biased towards a certain regime. We positively test the hypothesis that this improves the performance of RNNs on four musical data sets and a natural language processing (NLP) task, on which we achieve state-of-the-art results.
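The abstract's key object, fast dropout (Wang & Manning, 2013), replaces Bernoulli dropout sampling with a Gaussian approximation of a unit's pre-activation, propagating its mean and variance in closed form. The following is a minimal sketch of those first two moments for a single linear unit; the function name and the keep-probability parameterization are illustrative choices, not taken from the paper.

```python
import numpy as np

def fast_dropout_moments(x, w, keep_prob=0.5):
    """Mean and variance of w . (m * x) under Bernoulli dropout.

    Each input x_i is kept independently with probability `keep_prob`
    (mask m_i ~ Bernoulli(keep_prob)). Fast dropout approximates the
    resulting pre-activation as Gaussian with these two moments,
    avoiding Monte Carlo sampling of masks.
    """
    p = keep_prob
    mean = p * np.dot(w, x)                   # E[w . (m * x)]
    var = p * (1.0 - p) * np.dot(w * x, w * x)  # Var[w . (m * x)]
    return mean, var
```

A quick sanity check is to compare these closed-form moments against Monte Carlo estimates from explicitly sampled dropout masks; the two agree up to sampling noise.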