GRAPH EXPANSION IN PRUNED RECURRENT NEURAL NETWORK LAYERS PRESERVES PERFORMANCE

Published: 19 Mar 2024, Last Modified: 19 Mar 2024 · Tiny Papers @ ICLR 2024 Archive · CC BY 4.0
Keywords: RNN, LSTM, Ramanujan Graphs, Spectral Expansion
TL;DR: Spectral expansion in recurrent architectures preserves performance even when only the layer-wise bipartite graphs, rather than the whole graph, are Ramanujan.
Abstract: The expansion property of a graph refers to its strong connectivity combined with sparseness. It has been reported that deep neural networks can be pruned to a high degree of sparsity while maintaining their performance. We prune recurrent networks such as RNNs and LSTMs while maintaining a large spectral gap of the underlying graphs and ensuring their layer-wise expansion properties. We also study the time-unfolded recurrent network graphs in terms of the properties of their bipartite layers. Experimental results on the benchmark sequential MNIST and the noisy Google Speech Commands data show that expander graph properties are key to preserving the classification accuracy of RNNs and LSTMs.
Submission Number: 189
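As a rough illustration of the layer-wise criterion described in the abstract, the sketch below checks whether a pruned layer's binary connectivity mask defines a bipartite graph with a Ramanujan-style spectral gap. The function name ramanujan_gap, the use of average degrees in the bound, and the random example mask are assumptions made for illustration only; the paper's exact construction and criterion may differ.

import numpy as np

def ramanujan_gap(mask: np.ndarray):
    """Ramanujan-style check for the bipartite graph defined by a pruned
    layer's binary connectivity mask (a sketch, not the paper's exact test).

    mask: (n_out, n_in) 0/1 matrix; mask[i, j] = 1 if the weight is kept.
    The eigenvalues of the bipartite adjacency matrix are +/- the singular
    values of mask, so sigma_2 plays the role of the second-largest
    (nontrivial) eigenvalue.
    """
    # Average degrees on each side; pruned masks are rarely exactly
    # biregular, so this is an average-degree relaxation of the
    # (c, d)-biregular bound sqrt(c - 1) + sqrt(d - 1).
    d_out = mask.sum(axis=1).mean()
    d_in = mask.sum(axis=0).mean()

    sigma = np.linalg.svd(mask, compute_uv=False)  # sorted descending
    sigma_2 = sigma[1]
    bound = np.sqrt(max(d_out - 1.0, 0.0)) + np.sqrt(max(d_in - 1.0, 0.0))
    return sigma_2, bound

# Example: a random 80%-sparse mask for a hypothetical 64x64 recurrent layer.
rng = np.random.default_rng(0)
mask = (rng.random((64, 64)) < 0.2).astype(float)
s2, bound = ramanujan_gap(mask)
print(f"sigma_2 = {s2:.3f}  vs  Ramanujan-style bound = {bound:.3f}")
print("layer-wise expander-like:", s2 <= bound)

A layer whose mask passes this check retains a large spectral gap, which is the layer-wise expansion property the abstract associates with preserved classification accuracy.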