Training RNNs as Fast as CNNs

15 Feb 2018 (modified: 10 Feb 2022) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: Common recurrent neural network architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU) architecture, a recurrent unit that simplifies the computation and exposes more parallelism. In SRU, the majority of computation for each step is independent of the recurrence and can be easily parallelized. SRU is as fast as a convolutional layer and 5-10x faster than an optimized LSTM implementation. We study SRUs on a wide range of applications, including classification, question answering, language modeling, translation and speech recognition. Our experiments demonstrate the effectiveness of SRU and the trade-off it enables between speed and performance.
Keywords: recurrent neural networks, natural language processing
Data: [MPQA Opinion Corpus](https://paperswithcode.com/dataset/mpqa-opinion-corpus), [SQuAD](https://paperswithcode.com/dataset/squad), [SST](https://paperswithcode.com/dataset/sst)
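
As a rough, non-authoritative sketch of the idea stated in the abstract: the matrix multiplications that produce the candidate values and the gates depend only on the current input, so they can be batched across all time steps, and only a cheap element-wise recurrence remains sequential. The NumPy code below illustrates this under simplified assumptions (a single layer, no dropout or highway-scaling details from the paper); `sru_layer` and its parameter names are mine, not the authors' API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_layer(x, W, Wf, bf, Wr, br):
    """Simplified single-layer SRU forward pass (illustrative only).

    x: input sequence, shape (seq_len, d)
    W, Wf, Wr: weight matrices, shape (d, d)
    bf, br: bias vectors, shape (d,)
    """
    seq_len, d = x.shape

    # The heavy matrix multiplications do not depend on the previous
    # state, so they can be computed for all time steps at once --
    # this is the part that parallelizes like a convolutional layer.
    x_tilde = x @ W           # candidate values, (seq_len, d)
    f = sigmoid(x @ Wf + bf)  # forget gates,     (seq_len, d)
    r = sigmoid(x @ Wr + br)  # reset gates,      (seq_len, d)

    # Only this lightweight element-wise recurrence runs sequentially.
    c = np.zeros(d)
    h = np.empty_like(x)
    for t in range(seq_len):
        c = f[t] * c + (1.0 - f[t]) * x_tilde[t]
        h[t] = r[t] * np.tanh(c) + (1.0 - r[t]) * x[t]
    return h

# Usage: a random sequence of 50 steps with hidden size 128.
rng = np.random.default_rng(0)
d, T = 128, 50
x = rng.standard_normal((T, d))
W, Wf, Wr = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
h = sru_layer(x, W, Wf, np.zeros(d), Wr, np.zeros(d))
```

In this sketch the per-step work inside the loop is purely element-wise, which is why the sequential part stays cheap even for long sequences; the released implementation additionally fuses these element-wise operations into a single CUDA kernel.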