Expressive power of recurrent neural networks

15 Feb 2018 (modified: 21 Apr 2024) · ICLR 2018 Conference Blind Submission
Abstract: Deep neural networks are surprisingly efficient at solving practical tasks, but the theory behind this phenomenon is only starting to catch up with practice. Numerous works show that depth is the key to this efficiency. A certain class of deep convolutional networks – namely those that correspond to the Hierarchical Tucker (HT) tensor decomposition – has been proven to have exponentially higher expressive power than shallow networks: a shallow network of exponentially large width is required to realize the same score function as the deep architecture. In this paper, we prove the expressive power theorem (an exponential lower bound on the width of the equivalent shallow network) for a class of recurrent neural networks – those that correspond to the Tensor Train (TT) decomposition. This means that even processing an image patch by patch with an RNN can be exponentially more efficient than a (shallow) convolutional network with one hidden layer. Using theoretical results on the relation between these tensor decompositions, we compare the expressive power of HT- and TT-Networks. We also implement the recurrent TT-Networks and provide numerical evidence of their expressivity.
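The correspondence between an RNN and the TT decomposition can be made concrete: a recurrent cell whose state update is bilinear in the previous hidden state and the current patch feature computes, when unrolled over all patches, a Tensor Train contraction of the joint feature tensor. Below is a minimal NumPy sketch of such a cell; the function name, shapes, and toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def tt_rnn_score(patch_features, cores, output_vector):
    """Sketch of a TT-style recurrent scoring function (illustrative only).

    patch_features: list of length-m feature vectors, one per image patch.
    cores: list of TT-cores, each of shape (r_prev, m, r_next).
    output_vector: length r_last vector mapping the final state to a scalar score.
    """
    h = np.ones(cores[0].shape[0])  # initial hidden state (boundary of the TT contraction)
    for f, G in zip(patch_features, cores):
        # bilinear (multiplicative) update: h_t[b] = sum_{a,i} h_{t-1}[a] * G[a,i,b] * f_t[i]
        h = np.einsum('a,aib,i->b', h, G, f)
    return h @ output_vector  # scalar score for one class

# Toy usage: 4 patches, feature dimension m=3, all TT-ranks equal to 2
rng = np.random.default_rng(0)
feats = [rng.standard_normal(3) for _ in range(4)]
cores = [rng.standard_normal((2, 3, 2)) for _ in range(4)]
print(tt_rnn_score(feats, cores, rng.standard_normal(2)))
```

Because each step contracts one TT-core, the hidden-state size plays the role of the TT-rank; the paper's lower bound says that a shallow (single-hidden-layer) network reproducing the same score function generically needs exponentially many hidden units.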
TL;DR: We prove the exponential efficiency of recurrent-type neural networks over shallow networks.
Keywords: Recurrent Neural Networks, Tensor Train, tensor decompositions, expressive power
Data: CIFAR-10
Code: 2 community implementations (Papers with Code)
Community Implementations: 1 code implementation (CatalyzeX)