PRUNING IN TRAINING: LEARNING AND RANKING SPARSE CONNECTIONS IN DEEP CONVOLUTIONAL NETWORKS

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Blind Submission
Abstract: This paper proposes a Pruning in Training (PiT) framework for learning to reduce the parameter size of networks. Unlike existing works, our PiT framework employs sparse penalties to train networks, which helps rank the importance of weights and filters. Our PiT algorithms can directly prune the network without any fine-tuning, and the pruned networks still achieve performance comparable to the original networks. In particular, we introduce the (Group) Lasso-type Penalty (L-P / GL-P) and the (Group) Split LBI Penalty (S-P / GS-P) to regularize the networks, and propose a pruning strategy to help prune the network. We conduct extensive experiments on MNIST, CIFAR-10, and miniImageNet, and the results validate the efficacy of our proposed methods. Remarkably, on the MNIST dataset, our PiT framework can save 17.5% of the parameter size of LeNet-5 while achieving 98.47% recognition accuracy.
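The submission page carries no code, but as a rough illustration of the group-sparsity idea behind the GL-P variant (train with a Group Lasso penalty on conv filters, rank filters by group norm, prune the weakest without fine-tuning), here is a minimal PyTorch sketch. The penalty weight `lambda_gl` and pruning threshold are illustrative assumptions, not values from the paper, and the Split LBI penalties (S-P / GS-P) are not shown.

```python
import torch
import torch.nn as nn

def group_lasso_penalty(model):
    """Sum of L2 norms over each conv filter (one group per output channel).

    A standard Group Lasso regularizer: it drives whole filters toward
    zero, so filter importance can be ranked by its group norm.
    """
    penalty = 0.0
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            # weight shape: (out_channels, in_channels, kH, kW)
            penalty = penalty + m.weight.flatten(1).norm(dim=1).sum()
    return penalty

def prune_filters(model, threshold=1e-3):
    """Zero out filters whose group norm falls below `threshold` (illustrative value)."""
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, nn.Conv2d):
                norms = m.weight.flatten(1).norm(dim=1)
                mask = (norms >= threshold).float().view(-1, 1, 1, 1)
                m.weight.mul_(mask)

if __name__ == "__main__":
    # Toy model and one training step; lambda_gl = 1e-3 is an assumed setting.
    model = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.Flatten(),
                          nn.Linear(8 * 26 * 26, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x, y = torch.randn(4, 1, 28, 28), torch.randint(0, 10, (4,))
    loss = nn.functional.cross_entropy(model(x), y) + 1e-3 * group_lasso_penalty(model)
    loss.backward()
    opt.step()
    prune_filters(model)  # prune directly, without fine-tuning, per the PiT idea
```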
Keywords: Split LBI, sparse penalty, network pruning, feature selection
TL;DR: we propose an algorithm that learns to prune networks by enforcing structured sparsity penalties
Data: [MNIST](https://paperswithcode.com/dataset/mnist)