Kronecker Recurrent Units

12 Feb 2018 (modified: 15 Feb 2018) · ICLR 2018 Workshop Submission · Readers: Everyone
Keywords: Recurrent neural network, Vanishing and exploding gradients, Parameter efficiency, Kronecker matrices, Soft unitary constraint
TL;DR: Our work presents a Kronecker factorization of recurrent weight matrices for parameter-efficient and well-conditioned recurrent neural networks.
Abstract: Our work addresses two important issues with recurrent neural networks: (1) they are over-parameterized, and (2) the recurrent weight matrix is ill-conditioned. The former increases the sample complexity of learning and the training time. The latter causes the vanishing and exploding gradient problem. We present a flexible recurrent neural network model called Kronecker Recurrent Units (KRU). KRU achieves parameter efficiency in RNNs through a Kronecker-factored recurrent matrix. It overcomes the ill-conditioning of the recurrent matrix by enforcing soft unitary constraints on the factors. Thanks to the small dimensionality of the factors, maintaining these constraints is computationally efficient. Our experimental results on seven standard datasets reveal that KRU can reduce the number of parameters in the recurrent weight matrix by three orders of magnitude compared to existing recurrent models, without trading off statistical performance. In particular, these results show that while there are advantages in having a high-dimensional recurrent space, the capacity of the recurrent part of the model can be dramatically reduced.
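To make the two ideas in the abstract concrete, here is a minimal NumPy sketch of a recurrent weight matrix built as a Kronecker product of small factors, together with a soft unitary (here, for brevity, real orthogonal) penalty on those factors. All names, shapes, and the use of real-valued factors are illustrative assumptions for this sketch, not the authors' implementation, which the paper describes in full.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small factors whose Kronecker product forms the recurrent matrix.
# Two 2x2 factors give a 4x4 recurrent matrix from 2 * (2*2) = 8 parameters
# instead of 16; with more/larger factors the savings grow rapidly.
factors = [rng.standard_normal((2, 2)) for _ in range(2)]

def recurrent_matrix(factors):
    """Recurrent weight matrix W as the Kronecker product of the factors."""
    W = np.array([[1.0]])
    for F in factors:
        W = np.kron(W, F)
    return W

def soft_unitary_penalty(factors):
    """Sum of ||F^T F - I||_F^2 over the factors.

    Because each factor is tiny, enforcing this soft constraint is cheap,
    which is the point made in the abstract.
    """
    return sum(np.sum((F.T @ F - np.eye(F.shape[0])) ** 2) for F in factors)

W = recurrent_matrix(factors)            # used in a recurrence like h_t = act(W h_{t-1} + U x_t + b)
reg = soft_unitary_penalty(factors)      # added (suitably scaled) to the training loss
```

In training, the gradient of the penalty flows only through the small factors, so keeping the recurrent matrix close to unitary stays inexpensive even when the Kronecker product itself is high dimensional.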