Factorization tricks for LSTM networks
Oleksii Kuchaiev, Boris Ginsburg
Feb 17, 2017 (modified: Mar 13, 2017) · ICLR 2017 workshop submission · Readers: everyone
Abstract: Large Long Short-Term Memory (LSTM) networks have tens of millions of parameters and are very expensive to train. We present two simple ways of reducing the number of parameters in LSTM networks: the first is "matrix factorization by design" of the LSTM matrix into the product of two smaller matrices, and the second is partitioning of the LSTM matrix, its inputs, and its states into independent groups. Both approaches allow us to train large LSTM networks significantly faster to state-of-the-art perplexity. On the One Billion Word Benchmark we improve single-model perplexity down to 24.29.
TL;DR: Achieving a new single-model state-of-the-art perplexity (24.29) on the One Billion Word Benchmark using new cell structures.
Keywords: Natural language processing, Deep learning
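The two tricks in the abstract can be illustrated by counting parameters. The sketch below is a minimal NumPy illustration, not the authors' implementation: it assumes input size equals hidden size `n`, a factorization rank `r < n` for the factorized variant, and `g` independent groups for the partitioned variant; all variable names are illustrative.

```python
import numpy as np

n = 512   # hidden size (illustrative)
r = 128   # factorization rank, r < n (illustrative)
g = 4     # number of independent groups (illustrative)
rng = np.random.default_rng(0)

# Standard LSTM: one big matrix maps concat(input, prev hidden),
# of size 2n, to the 4n gate pre-activations.
W_full = rng.standard_normal((4 * n, 2 * n))

# "Matrix factorization by design": replace W with the product
# W2 @ W1 of two smaller matrices sharing inner dimension r.
W1 = rng.standard_normal((r, 2 * n))
W2 = rng.standard_normal((4 * n, r))

x_h = rng.standard_normal(2 * n)       # concat(input, prev hidden)
gates = W2 @ (W1 @ x_h)                # factorized gate pre-activations

# Group partitioning: split inputs/states into g groups, each with
# its own small matrix mapping 2n/g inputs to 4n/g gate outputs.
W_groups = [rng.standard_normal((4 * n // g, 2 * n // g)) for _ in range(g)]

params_full = W_full.size                       # 8 * n^2
params_fact = W1.size + W2.size                 # r * (2n + 4n)
params_group = sum(W.size for W in W_groups)    # 8 * n^2 / g

print(params_full, params_fact, params_group)
```

Both variants shrink the dominant weight matrix: with these illustrative sizes, factorization uses 6·n·r parameters instead of 8·n², and grouping divides the count by g.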