Coarse Pruning of Convolutional Neural Networks with Random Masks
Sajid Anwar, Wonyong Sung
Nov 04, 2016 (modified: Jan 18, 2017) · ICLR 2017 conference submission · Readers: everyone
Abstract: The learning capability of a neural network improves with increasing depth, but at a higher computational cost. Wider layers with dense kernel connectivity patterns
further increase this cost and may hinder real-time inference. We propose feature
map and kernel level pruning for reducing the computational complexity of
a deep convolutional neural network. Pruning feature maps reduces the width
of a layer and hence requires no sparse representation. Kernel pruning further
changes the dense connectivity pattern into a sparse one. Due to their coarse
nature, these pruning granularities can be exploited by GPU and VLSI-based
implementations. We propose a simple strategy for choosing the least adversarial
pruning masks. The proposed approach is generic and can select good pruning
masks for feature map, kernel and intra-kernel pruning. The pruning masks are
generated randomly, and the best performing one is selected using the evaluation
set. The number of random pruning masks that must be tried depends on the pruning
ratio and is around 100 when a 40% complexity reduction is needed. The pruned
network is retrained to compensate for the loss in accuracy. We have extensively
evaluated the proposed approach with the CIFAR-10, SVHN and MNIST datasets.
Experiments with the CIFAR-10 dataset show that more than 85% sparsity can be
induced in the convolutional layers with less than a 1% increase in the misclassification
rate over the baseline network.
TL;DR: This work proposes a new pruning strategy for CNNs. Feature map and kernel pruning granularities are proposed for good pruning ratios and a simple sparse representation.
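To make the selection strategy concrete, the sketch below illustrates the abstract's core loop: draw random binary masks at a given pruning ratio, score each on the evaluation set, and keep the best. This is a minimal NumPy sketch under stated assumptions, not the authors' implementation; the `evaluate` callback, function names, and the feature-map-level masking are hypothetical placeholders.

```python
# A minimal sketch of random-mask pruning as described in the abstract.
# Assumption: evaluate(mask) returns the evaluation-set misclassification
# rate of the network with the masked-out feature maps zeroed; that helper
# is hypothetical and must be supplied by the surrounding training code.
import numpy as np

def random_feature_map_mask(num_maps, pruning_ratio, rng):
    """Return a boolean mask keeping (1 - pruning_ratio) of the feature maps."""
    num_pruned = int(round(num_maps * pruning_ratio))
    mask = np.ones(num_maps, dtype=bool)
    pruned = rng.choice(num_maps, size=num_pruned, replace=False)
    mask[pruned] = False  # False marks a pruned feature map
    return mask

def select_best_mask(evaluate, num_maps, pruning_ratio,
                     num_trials=100, seed=0):
    """Try num_trials random masks; return the one with the lowest error.

    The abstract suggests around 100 trials suffice for a 40% complexity
    reduction, so num_trials defaults to 100 here.
    """
    rng = np.random.default_rng(seed)
    best_mask, best_error = None, float("inf")
    for _ in range(num_trials):
        mask = random_feature_map_mask(num_maps, pruning_ratio, rng)
        error = evaluate(mask)
        if error < best_error:
            best_mask, best_error = mask, error
    return best_mask, best_error
```

After selection, the surviving network would be retrained to recover the accuracy lost to pruning, as the abstract describes. The same loop applies unchanged to kernel or intra-kernel masks; only the shape of the mask and the meaning of `evaluate` change.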