Neural Networks with Block Diagonal Inner Product Layers

Amy Nesky, Quentin Stout

Feb 15, 2018 (modified: Oct 26, 2017) · ICLR 2018 Conference Blind Submission
  • Abstract: Artificial neural networks have opened up a world of possibilities in data science and artificial intelligence, but neural networks are cumbersome tools that grow with the complexity of the learning problem. We address this issue by considering a modified version of the fully connected layer that we call a block diagonal inner product layer. These modified layers have block diagonal weight matrices, turning a single fully connected layer into a set of densely connected neuron groups. This idea is a natural extension of group, or depthwise separable, convolutional layers applied to fully connected layers. Block diagonal inner product layers can be achieved either by initializing a purely block diagonal weight matrix or by iteratively pruning off-diagonal block entries. This method condenses network storage and reduces run time without significant adverse effect on test accuracy, thus offering a new approach to improving network computation efficiency.
  • TL;DR: We look at neural networks with block diagonal inner product layers for efficiency.
  • Keywords: Deep Learning, Neural Networks
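The layer described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `block_diagonal_forward` and the block shapes are hypothetical, and the sketch only covers the forward pass of the "initialize a purely block diagonal weight matrix" route, where each weight block acts on its own slice of the input.

```python
import numpy as np

def block_diagonal_forward(x, blocks):
    """Forward pass of a block diagonal inner product layer (illustrative).

    x      : (batch, in_features) input
    blocks : list of (in_i, out_i) weight matrices with sum(in_i) == in_features

    Each block multiplies its own contiguous slice of the input, so the
    layer is equivalent to a dense layer whose weight matrix is block
    diagonal, but it stores and computes only the diagonal blocks.
    """
    outputs, start = [], 0
    for W in blocks:
        in_i = W.shape[0]
        outputs.append(x[:, start:start + in_i] @ W)
        start += in_i
    return np.concatenate(outputs, axis=1)

# Sanity check: the grouped computation matches an explicit dense layer
# whose weight matrix is block diagonal.
rng = np.random.default_rng(0)
blocks = [rng.standard_normal((4, 3)), rng.standard_normal((4, 3))]  # 2 groups
x = rng.standard_normal((5, 8))

W_full = np.zeros((8, 6))          # full dense weight matrix, mostly zeros
W_full[:4, :3] = blocks[0]         # first diagonal block
W_full[4:, 3:] = blocks[1]         # second diagonal block

assert np.allclose(block_diagonal_forward(x, blocks), x @ W_full)
```

Here the block version stores 24 weights instead of 48, and the gap widens as the number of groups grows; the pruning route mentioned in the abstract would instead start from `W_full` and iteratively zero out the off-diagonal block entries during training.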