Fast On-the-fly Retraining-free Sparsification of Convolutional Neural Networks

Amir H. Ashouri, Tarek Abdelrahman, Alwyn Dos Remedios

Published: 07 Nov 2018, Last Modified: 05 May 2023. NIPS 2018 Workshop CDNNRIA Blind Submission.
Abstract: Modern Convolutional Neural Networks (CNNs) are complex, encompassing millions of parameters. Their deployment imposes computational, storage, and energy demands, particularly on embedded platforms. Existing approaches to prune or sparsify CNNs require retraining to maintain inference accuracy, and such retraining is not feasible in some contexts. In this paper, we explore the sparsification of CNNs by proposing three model-independent methods. Our methods are applied on-the-fly and require no retraining. We show that the weights of state-of-the-art models can be reduced by up to 73% (a compression factor of 3.7x) without incurring more than a 5% loss in Top-5 accuracy. Additional fine-tuning gains only 8% more sparsity, which indicates that our fast on-the-fly methods are effective.
TL;DR: In this paper, we develop fast retraining-free sparsification methods that can be deployed for on-the-fly sparsification of CNNs in many industrial contexts.
Keywords: sparsity, pruning, convolutional neural networks, cnn, dnn, deep learning
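The abstract does not detail the three proposed methods, so the following is only a rough illustration of what retraining-free, on-the-fly sparsification of a CNN can look like: a single pass that zeros small-magnitude weights in place, with no gradient updates. This is a minimal PyTorch sketch; the function name `sparsify_by_magnitude` and the per-layer thresholding scheme are assumptions for illustration, not the paper's methods.

```python
# Minimal sketch of retraining-free magnitude-based sparsification
# (illustrative only; not the paper's specific methods).
import torch
import torch.nn as nn

@torch.no_grad()
def sparsify_by_magnitude(model: nn.Module, fraction: float = 0.7) -> float:
    """Zero roughly the smallest-magnitude `fraction` of weights in each
    conv/fully-connected layer, in place. Returns the overall fraction
    of weights set to zero. Biases are left untouched."""
    zeroed, total = 0, 0
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            w = module.weight
            k = int(fraction * w.numel())
            if k == 0:
                continue
            # Per-layer threshold: the k-th smallest absolute weight.
            threshold = w.abs().flatten().kthvalue(k).values
            # Keep only weights strictly above the threshold
            # (ties at the threshold are also zeroed).
            mask = w.abs() > threshold
            w.mul_(mask)
            zeroed += (~mask).sum().item()
            total += w.numel()
    return zeroed / max(total, 1)
```

Applied to a pretrained model (e.g., a torchvision ResNet), such a pass runs in seconds with no retraining, and the resulting drop in Top-5 accuracy can then be measured directly; this sparsity-versus-accuracy trade-off is the kind the abstract reports.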