Stable Recurrent Models

Published: 21 Dec 2018, Last Modified: 05 May 2023 · ICLR 2019 Conference Blind Submission · Readers: Everyone
Abstract: Stability is a fundamental property of dynamical systems, yet to date it has had little bearing on the practice of recurrent neural networks. In this work, we conduct a thorough investigation of stable recurrent models. Theoretically, we prove that stable recurrent neural networks are well approximated by feed-forward networks for the purpose of both inference and training by gradient descent. Empirically, we demonstrate that stable recurrent models often perform as well as their unstable counterparts on benchmark sequence tasks. Taken together, these findings shed light on the effective power of recurrent networks and suggest that much of sequence learning happens, or can be made to happen, in the stable regime. Moreover, our results help explain why practitioners often succeed in replacing recurrent models with feed-forward models.
Keywords: stability, gradient descent, non-convex optimization, recurrent neural networks
TL;DR: Stable recurrent models can be approximated by feed-forward networks and empirically perform as well as unstable models on benchmark tasks.
Data: [Penn Treebank](https://paperswithcode.com/dataset/penn-treebank), [WikiText-2](https://paperswithcode.com/dataset/wikitext-2)
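
To make the abstract's claim concrete, here is a minimal sketch (not the authors' code) of one common notion of stability for a linear recurrence h_t = W h_{t-1} + U x_t: the recurrent matrix W has spectral norm below 1. The `project_to_stable` helper, the choice of norm bound `lam`, and the truncated "feed-forward" unrolling are illustrative assumptions rather than details taken from the paper; they only show why a stable recurrence forgets old inputs and is therefore well approximated by a model that sees a bounded window.

```python
# Illustrative sketch: stability via spectral norm, and truncation as a
# feed-forward approximation of a stable linear recurrence.
import numpy as np

def project_to_stable(W, lam=0.9):
    """Rescale W so its largest singular value is at most lam (< 1)."""
    sigma = np.linalg.norm(W, ord=2)  # spectral norm = largest singular value
    return W if sigma <= lam else (lam / sigma) * W

def rnn_final_state(W, U, xs):
    """Run the linear recurrence h_t = W h_{t-1} + U x_t from h_0 = 0."""
    h = np.zeros(W.shape[0])
    for x in xs:
        h = W @ h + U @ x
    return h

def truncated_final_state(W, U, xs, k):
    """Feed-forward-style approximation: only the last k inputs are used."""
    return rnn_final_state(W, U, xs[-k:])

rng = np.random.default_rng(0)
d, n, T = 8, 4, 300
W = project_to_stable(rng.standard_normal((d, d)))   # enforce stability
U = rng.standard_normal((d, n))
xs = rng.standard_normal((T, n))

full = rnn_final_state(W, U, xs)
trunc = truncated_final_state(W, U, xs, k=100)
# Because ||W|| < 1, the influence of inputs older than k steps decays
# geometrically, so the truncation error is small.
print("approximation error:", np.linalg.norm(full - trunc))
```

An unstable model (spectral norm above 1) admits no such bound: old inputs can dominate the state, and no fixed truncation window suffices.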