Factorization tricks for LSTM networks

ICLR 2017 workshop submission
Abstract: Large Long Short-Term Memory (LSTM) networks have tens of millions of parameters and are very expensive to train. We present two simple ways of reducing the number of parameters in LSTM networks: the first is "matrix factorization by design", which replaces the LSTM matrix with the product of two smaller matrices; the second is partitioning of the LSTM matrix, its inputs, and its states into independent groups. Both approaches allow us to train large LSTM networks significantly faster to state-of-the-art perplexity. On the One Billion Word Benchmark we improve single-model perplexity to 24.29.
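
To make the first idea concrete, here is a minimal sketch of a low-rank "factorization by design" LSTM cell in PyTorch. This is not the authors' implementation: the class name FactorizedLSTMCell, the parameter names hidden_size and rank, and the assumption that the input size equals the hidden size are illustrative choices only.

import torch
import torch.nn as nn

class FactorizedLSTMCell(nn.Module):
    # Illustrative low-rank cell: the single LSTM weight matrix W
    # (shape [2n, 4n], acting on the concatenation of x_t and h_{t-1})
    # is replaced by the product W2 @ W1 with inner dimension r < n.
    def __init__(self, hidden_size, rank):
        super().__init__()
        self.w1 = nn.Linear(2 * hidden_size, rank, bias=False)  # projects [x_t, h_{t-1}] down to rank r
        self.w2 = nn.Linear(rank, 4 * hidden_size)              # expands to the four gate pre-activations

    def forward(self, x, state):
        h, c = state
        # One low-rank projection produces all four gate pre-activations.
        gates = self.w2(self.w1(torch.cat([x, h], dim=-1)))
        i, f, g, o = torch.chunk(gates, 4, dim=-1)
        c_new = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h_new = torch.sigmoid(o) * torch.tanh(c_new)
        return h_new, c_new

Under these assumptions, the full cell has roughly 8n^2 weights (a 2n-by-4n matrix), while the factorized cell has roughly 6nr (2n-by-r plus r-by-4n); for example, with n = 1024 and r = 128 that is about 8.4M versus 0.8M parameters, which is the kind of saving the abstract refers to.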
TL;DR: Achieving a new single-model state-of-the-art perplexity (24.29) on the One Billion Word Benchmark using new cell structures.
Conflicts: none
Keywords: Natural language processing, Deep learning