NOISIN: Unbiased Regularization for Recurrent Neural Networks

12 Feb 2018 (modified: 05 May 2023) · ICLR 2018 Workshop Submission
Abstract: Recurrent neural networks (RNNs) are powerful models of sequential data and have been used successfully in domains such as text and speech. However, RNNs are prone to overfitting, so regularization is essential. In this paper we develop NOISIN, a new method for regularizing RNNs. NOISIN injects random noise into the hidden states of the RNN and then maximizes the corresponding marginal likelihood of the data. We show how NOISIN applies to any RNN, and we study many different types of noise. NOISIN is unbiased: it preserves the underlying RNN on average. On language modeling benchmarks, NOISIN improves over dropout by as much as 12.2% on Penn Treebank and 9.4% on WikiText-2.
Keywords: regularization, recurrent neural networks, language modeling
TL;DR: Unbiased noise injection (as defined in this paper) in the hidden units of RNNs improves the generalization capabilities of RNN-based models
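Below is a minimal NumPy sketch of the kind of unbiased noise injection the abstract describes: zero-mean noise is added to the hidden state during training, so the noisy transition equals the deterministic one in expectation. This is an illustration, not the authors' implementation; the Elman cell, the Gaussian noise distribution, and all names (rnn_step, noisin_step, sigma) are assumptions for the example, and the marginal-likelihood training objective from the abstract is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x, h_prev, W_x, W_h, b):
    """Deterministic Elman transition: h_t = tanh(W_x x_t + W_h h_{t-1} + b)."""
    return np.tanh(W_x @ x + W_h @ h_prev + b)

def noisin_step(x, h_prev, W_x, W_h, b, sigma=0.1):
    """Noise-injected transition: add zero-mean Gaussian noise to the hidden
    state, so E[h_noisy | h_prev, x] equals the deterministic hidden state
    (the unbiasedness property described in the abstract)."""
    h = rnn_step(x, h_prev, W_x, W_h, b)
    eps = rng.normal(0.0, sigma, size=h.shape)  # zero mean => unbiased on average
    return h + eps

# Toy dimensions: 3-dimensional inputs, 5-dimensional hidden state.
d_in, d_h = 3, 5
W_x = rng.normal(size=(d_h, d_in)) * 0.1
W_h = rng.normal(size=(d_h, d_h)) * 0.1
b = np.zeros(d_h)

h = np.zeros(d_h)
for x in rng.normal(size=(4, d_in)):  # a length-4 input sequence
    h = noisin_step(x, h, W_x, W_h, b)  # training: noise on
    # at evaluation time one would call rnn_step instead (noise off)
```

Any noise distribution whose mean matches the deterministic hidden state would satisfy the unbiasedness requirement stated in the abstract; for example, multiplicative mean-one noise would serve equally well in place of the additive zero-mean Gaussian used here.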