NOISIN: Unbiased Regularization for Recurrent Neural Networks
Adji B. Dieng, Rajesh Ranganath, Jaan Altosaar, David M. Blei
Feb 12, 2018 (modified: Feb 12, 2018) · ICLR 2018 Workshop Submission · Readers: Everyone
Abstract: Recurrent neural networks (RNNs) are powerful models of sequential data and have been used successfully in domains such as text and speech. However, RNNs are susceptible to overfitting, so regularization is important. In this paper we develop NOISIN, a new method for regularizing RNNs. NOISIN injects random noise into the hidden states of the RNN and then maximizes the corresponding marginal likelihood of the data. We show how NOISIN applies to any RNN, and we study many different types of noise. NOISIN is unbiased—it preserves the underlying RNN on average. On language modeling benchmarks, NOISIN improves over dropout by as much as 12.2% on the Penn Treebank and 9.4% on the Wikitext-2 dataset.
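The abstract's central idea — injecting noise into the hidden states while keeping the RNN unbiased (preserved on average) — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the vanilla tanh transition, and the choice of zero-mean Gaussian noise are assumptions made for the example; the unbiasedness property shown (E[noisy hidden state] = deterministic hidden state) is what matters.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(h, x, W_h, W_x, b):
    """Deterministic vanilla-RNN transition (illustrative choice of cell)."""
    return np.tanh(W_h @ h + W_x @ x + b)

def noisin_step(h, x, W_h, W_x, b, sigma=0.1):
    """Same transition with zero-mean noise injected into the hidden state.

    Because E[eps] = 0, the noisy hidden state equals the deterministic one
    in expectation -- the "unbiased" property described in the abstract.
    """
    h_det = rnn_step(h, x, W_h, W_x, b)
    eps = rng.normal(0.0, sigma, size=h_det.shape)  # zero-mean => unbiased
    return h_det + eps

# Tiny demo: averaging many noisy steps recovers the deterministic step.
d, k = 8, 4  # hidden and input sizes (arbitrary for the sketch)
W_h = 0.1 * rng.normal(size=(d, d))
W_x = 0.1 * rng.normal(size=(d, k))
b = np.zeros(d)
h0, x0 = np.zeros(d), rng.normal(size=k)

h_det = rnn_step(h0, x0, W_h, W_x, b)
h_avg = np.mean(
    [noisin_step(h0, x0, W_h, W_x, b) for _ in range(20000)], axis=0
)
print(np.max(np.abs(h_avg - h_det)))  # small, shrinking as samples grow
```

In training, the noise would be drawn fresh at every time step and the model fit by maximizing the marginal likelihood over the noise, per the abstract; the demo above only verifies the unbiasedness property.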
TL;DR: Unbiased noise injection (as defined in this paper) into the hidden units of RNNs improves the generalization of RNN-based models.
Keywords:regularization, recurrent neural networks, language modeling