Pruning at a Glance: A Structured Class-Blind Pruning Technique for Model Compression

Published: 07 Nov 2018, Last Modified: 05 May 2023 · NIPS 2018 Workshop CDNNRIA Blind Submission · Readers: Everyone
Abstract: High performance of deep learning models typically comes at the cost of considerable model size and computation time. These factors limit applicability for deployment on memory- and battery-constrained devices such as mobile phones or embedded systems. In this work we propose a novel pruning technique that eliminates entire filters and neurons according to their L1-norm relative to the rest of the network, yielding higher compression and decreased redundancy in the parameters. The resulting network is not sparse, yet it is much more compact and requires no special infrastructure for its deployment. We demonstrate the viability of our method by achieving 97.4%, 47.8% and 53% compression of LeNet-5, ResNet-56 and ResNet-110 respectively, exceeding state-of-the-art compression results reported on ResNet without losing any performance compared to the baseline. Our approach not only exhibits good performance but is also easy to implement on many architectures.
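To illustrate the class-blind idea described in the abstract, below is a minimal PyTorch sketch that ranks every convolutional filter by its L1-norm in a single network-wide list and marks the smallest fraction for removal, rather than pruning a fixed share per layer. The helper names, the per-filter normalisation, and the `prune_ratio` parameter are illustrative assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn as nn

def class_blind_filter_scores(model):
    """Collect the L1-norm of every conv filter across the whole network.
    Scores are normalised by filter size (an assumption) so that layers
    with different kernel shapes remain comparable in one global ranking."""
    scores = []  # entries of the form (layer_name, filter_index, score)
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            w = module.weight.detach()           # shape: (out_ch, in_ch, kH, kW)
            l1 = w.abs().sum(dim=(1, 2, 3))      # L1-norm per output filter
            l1 = l1 / w[0].numel()               # normalise by parameters per filter
            scores += [(name, i, v.item()) for i, v in enumerate(l1)]
    return scores

def select_filters_to_prune(model, prune_ratio=0.5):
    """Class-blind selection: sort all filters of the network in one list
    and mark the smallest `prune_ratio` fraction for removal, regardless
    of which layer they belong to."""
    scores = class_blind_filter_scores(model)
    scores.sort(key=lambda t: t[2])              # smallest relative L1-norm first
    n_prune = int(len(scores) * prune_ratio)
    return scores[:n_prune]
```

A usage sketch would call `select_filters_to_prune(model, 0.5)` and then physically remove the selected filters (and the corresponding input channels of the following layers), leaving a smaller, non-sparse network that runs without any special inference infrastructure.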
TL;DR: We propose a novel structured class-blind pruning technique to produce highly compressed neural networks.
Keywords: Pruning, Compression, Efficient Learning, Class-Blind