Bayesian Sparsification of Gated Recurrent Neural Networks

Published: 07 Nov 2018, Last Modified: 05 May 2023. NIPS 2018 Workshop CDNNRIA Blind Submission.
Abstract: Bayesian methods have been successfully applied to sparsify the weights of neural networks and to remove structural units, e.g. neurons, from them. We apply and further develop this approach for gated recurrent architectures. Specifically, in addition to sparsifying individual weights and neurons, we propose to sparsify the preactivations of the gates and of the information flow in the LSTM. This makes some gates and information-flow components constant, which speeds up the forward pass and improves compression. Moreover, the resulting structure of gate sparsity is interpretable and depends on the task.
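As a rough illustration of what "constant gates" mean in practice, here is a minimal PyTorch-style sketch (not the authors' code; names such as ConstGateLSTMCell and const_mask are ours): if Bayesian sparsification prunes every incoming weight of a gate preactivation unit, that unit reduces to its bias, so the gate value no longer depends on the input and can be treated as a constant.

```python
# Hypothetical sketch of an LSTM cell with "frozen" gate preactivations.
# Units flagged in const_mask are assumed to have had all incoming weights
# pruned, so their preactivation equals the bias alone.
import torch
import torch.nn as nn

class ConstGateLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        # Stacked weights for the four gates: input (i), forget (f), cell (g), output (o)
        self.W = nn.Parameter(torch.randn(4 * hidden_size, input_size) * 0.1)
        self.U = nn.Parameter(torch.randn(4 * hidden_size, hidden_size) * 0.1)
        self.b = nn.Parameter(torch.zeros(4 * hidden_size))
        # const_mask[j] = True: unit j of the stacked preactivation vector is constant
        self.register_buffer("const_mask", torch.zeros(4 * hidden_size, dtype=torch.bool))

    def forward(self, x, state):
        h, c = state
        pre = x @ self.W.t() + h @ self.U.t() + self.b   # (batch, 4 * hidden_size)
        # Constant units ignore the input entirely: preactivation == bias
        pre = torch.where(self.const_mask, self.b, pre)
        i, f, g, o = pre.chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c
```

In a real implementation the rows of W and U belonging to constant units could be dropped and sigmoid(bias) precomputed, which is where the forward-pass speed-up would come from.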
TL;DR: We propose to sparsify preactivations of gates and information flow in LSTM to make them constant and boost the neuron sparsity level
Keywords: recurrent neural networks, Bayesian methods, sparsification, compression