Characterizing Sparse Connectivity Patterns in Neural Networks

Sourya Dey, Kuan-Wen Huang, Peter A. Beerel, Keith M. Chugg

Feb 15, 2018 (modified: Feb 15, 2018) · ICLR 2018 Conference Blind Submission
  • Abstract: We propose a novel way of reducing the number of parameters in the storage-hungry fully connected layers of a neural network by using pre-defined sparsity, where the majority of connections are absent prior to starting training. Our results indicate that convolutional neural networks can operate without any loss of accuracy at less than 0.5% classification layer connection density, or less than 5% overall network connection density. We also investigate the effects of pre-defining the sparsity of networks with only fully connected layers. Based on our sparsifying technique, we introduce the 'scatter' metric to characterize the quality of a particular connection pattern. As proof of concept, we show results on CIFAR, MNIST and a new dataset on classifying Morse code symbols, which highlights some interesting trends and limits of sparse connection patterns.
  • TL;DR: Neural networks can be pre-defined to have sparse connectivity without performance degradation.
  • Keywords: Machine learning, Neural networks, Sparse neural networks, Pre-defined sparsity, Scatter, Connectivity patterns, Adjacency matrix, Parameter Reduction, Morse code
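The core idea in the abstract, pre-defined sparsity, can be illustrated with a minimal sketch: a fully connected layer whose weight matrix is element-wise multiplied by a fixed binary mask chosen once before training, so absent connections never exist. This is a hypothetical NumPy illustration of the general concept only; the paper's actual connection patterns and its 'scatter' metric are not reproduced here, and the names (`make_sparse_mask`, `SparseLinear`, `density`) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sparse_mask(n_in, n_out, density):
    """Fixed binary mask: each output unit keeps a random subset of inputs.

    Chosen once, before training, and never changed -- the defining
    property of pre-defined sparsity (illustrative pattern only).
    """
    k = max(1, int(round(density * n_in)))  # connections kept per output unit
    mask = np.zeros((n_in, n_out))
    for j in range(n_out):
        keep = rng.choice(n_in, size=k, replace=False)
        mask[keep, j] = 1.0
    return mask

class SparseLinear:
    """Fully connected layer whose weights are masked to a fixed pattern."""

    def __init__(self, n_in, n_out, density):
        self.mask = make_sparse_mask(n_in, n_out, density)
        self.W = rng.standard_normal((n_in, n_out)) * 0.1
        self.b = np.zeros(n_out)

    def forward(self, x):
        # Masked-out entries of W contribute nothing, so only the
        # pre-defined connections participate in the matrix product.
        return x @ (self.W * self.mask) + self.b

# A layer at 5% connection density: 5 of 100 inputs feed each output.
layer = SparseLinear(n_in=100, n_out=10, density=0.05)
y = layer.forward(np.ones((1, 100)))
print(y.shape)           # (1, 10)
print(layer.mask.sum())  # 50.0 active connections out of 1000
```

During training, gradients would also be multiplied by the same mask so that pruned weights stay exactly zero; at these densities the layer stores 20x fewer effective parameters than its dense counterpart.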