Skip RNN: Learning to Skip State Updates in Recurrent Neural Networks

15 Feb 2018 (modified: 14 Oct 2024) · ICLR 2018 Conference Blind Submission · Readers: Everyone
Abstract: Recurrent Neural Networks (RNNs) continue to show outstanding performance in sequence modeling tasks. However, training RNNs on long sequences often faces challenges such as slow inference, vanishing gradients, and difficulty in capturing long-term dependencies. In backpropagation through time settings, these issues are tightly coupled with the large, sequential computational graph resulting from unfolding the RNN in time. We introduce the Skip RNN model, which extends existing RNN models by learning to skip state updates and thus shortens the effective size of the computational graph. This model can also be encouraged to perform fewer state updates through a budget constraint. We evaluate the proposed model on various tasks and show how it can reduce the number of required RNN updates while preserving, and sometimes even improving, the performance of the baseline RNN models. Source code is publicly available at https://imatge-upc.github.io/skiprnn-2017-telecombcn/.
TL;DR: A modification of existing RNN architectures that allows them to skip state updates while preserving the performance of the original architectures.
Keywords: recurrent neural networks, dynamic learning, conditional computation
Code: [imatge-upc/skiprnn-2017-telecombcn](https://github.com/imatge-upc/skiprnn-2017-telecombcn) + [2 community implementations on Papers with Code](https://paperswithcode.com/paper/?openreview=HkwVAXyCW)
Data: [IMDb Movie Reviews](https://paperswithcode.com/dataset/imdb-movie-reviews), [MNIST](https://paperswithcode.com/dataset/mnist), [UCF101](https://paperswithcode.com/dataset/ucf101)
Community Implementations: [4 code implementations on CatalyzeX](https://www.catalyzex.com/paper/skip-rnn-learning-to-skip-state-updates-in/code)