WSNet: Learning Compact and Efficient Networks with Weight Sampling
Nov 07, 2017 (modified: Nov 07, 2017) · ICLR 2018 Conference Blind Submission · Readers: everyone
Abstract: We present a novel network architecture for learning compact and efficient deep neural networks, in which the weights of convolution filters and fully connected layers, instead of being learned independently, are sampled from a compact set of parameters that enforces weight sharing during learning. Specifically, in this work we consider learning compact and efficient 1D convolutional neural networks for audio classification. We show that our novel weight sampling scheme enables not only weight sharing but also computation sharing, so we can learn much smaller and more efficient yet competitive networks compared to baseline networks with the same number of convolution filters. Extensive experiments on multiple audio classification datasets verify the effectiveness of our approach. Combined with weight quantization, we demonstrate models that are up to 180$\times$ smaller than the baselines without a noticeable performance drop.
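To illustrate the weight-sampling idea described in the abstract, here is a minimal NumPy sketch in which each 1D convolution filter is read as an overlapping window of one shared parameter vector, so many filters are stored in far fewer parameters. The function name, the strided-window layout, and the specific stride are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sample_filters(shared_weights, n_filters, kernel_size, stride=1):
    """Sketch: read n_filters overlapping windows out of a shared
    1D parameter vector (assumed layout; names are hypothetical)."""
    return np.stack([
        shared_weights[i * stride : i * stride + kernel_size]
        for i in range(n_filters)
    ])

# The shared vector only needs (n_filters - 1) * stride + kernel_size
# entries, versus n_filters * kernel_size independent weights.
n_filters, kernel_size, stride = 16, 9, 2
shared = np.random.randn((n_filters - 1) * stride + kernel_size)

filters = sample_filters(shared, n_filters, kernel_size, stride)
print(filters.shape)  # (16, 9): 16 filters of length 9
print(shared.size)    # 39 stored weights vs 144 if learned independently
```

Because neighboring filters overlap in the shared vector, partial convolution results can also be reused across filters, which is the computation-sharing benefit the abstract mentions.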
TL;DR: We present a novel network architecture for learning compact and efficient deep neural networks.
Keywords: Deep learning, model compression