Building Efficient ConvNets using Redundant Feature Pruning

02 Feb 2018 (modified: 02 Feb 2018) · ICLR 2018 Workshop Submission
Abstract: This paper presents an efficient technique to prune deep and/or wide convolutional neural network models by eliminating redundant features (or filters). Previous studies have shown that over-sized deep neural network models tend to produce many redundant features that are either shifted versions of one another or are very similar and show little or no variation, resulting in filtering redundancy. We propose to prune these redundant features, along with their connecting feature maps, according to their relative cosine distances in the feature space, yielding a smaller network with reduced inference cost and competitive performance. We show empirically on the CIFAR-10 dataset that inference cost can be reduced by 40% for VGG-16, 27% for ResNet-56, and 39% for ResNet-110.
Keywords: Network compression, redundant feature pruning, filter pruning
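To illustrate the idea of pruning by cosine distance, the sketch below greedily keeps a layer's filters whose cosine distance to every already-kept filter exceeds a threshold, and flags the rest as redundant. This is a minimal illustration only: the threshold value `tau`, the greedy selection order, and the helper name `prune_redundant_filters` are assumptions for this example, not the paper's exact criterion or implementation.

```python
import numpy as np

def prune_redundant_filters(weights, tau=0.1):
    """Greedily keep filters whose cosine distance to every kept filter
    exceeds tau; mark the rest as redundant.

    weights: array of shape (num_filters, ...) holding one conv layer's filters.
    tau: cosine-distance threshold (hypothetical value, not from the paper).
    Returns (kept_indices, pruned_indices).
    """
    flat = weights.reshape(weights.shape[0], -1)
    # Normalize each filter so cosine distance = 1 - dot product.
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    unit = flat / np.maximum(norms, 1e-12)

    kept, pruned = [], []
    for i in range(unit.shape[0]):
        if kept:
            dists = 1.0 - unit[kept] @ unit[i]
            if np.min(dists) < tau:  # too close to a filter we already keep
                pruned.append(i)
                continue
        kept.append(i)
    return kept, pruned

# Example: 64 filters of a 16-channel 3x3 conv layer with random weights.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 16, 3, 3))
kept, pruned = prune_redundant_filters(w, tau=0.1)
print(f"kept {len(kept)} filters, pruned {len(pruned)}")
```

After such a pass, the pruned filters and their corresponding feature maps (and the matching input channels of the next layer) would be removed and the network fine-tuned, which is the usual workflow for filter-level pruning.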