Skeleton Neural Networks via Low-Rank Guided Filter Pruning

Published: 01 Jan 2023. Last Modified: 20 May 2025. IEEE Trans. Circuits Syst. Video Technol. 2023. License: CC BY-SA 4.0
Abstract: Filter pruning is one of the most popular approaches for compressing convolutional neural networks (CNNs). The most critical task in pruning is to evaluate the importance of each convolutional filter, so that the less important filters can be removed while the overall model performance is minimally affected. In each layer, some filters may be linearly dependent on one another, meaning they carry redundant, replaceable information. Such redundancy can be removed without significantly affecting the information richness or the model's performance. In this paper, we propose a novel low-rank guided pruning scheme that obtains skeleton neural networks by alternately training and pruning CNNs. In each step, the network is trained with nuclear-norm regularization to drive the filters of each layer toward low rank, and is then pruned so that a maximal linearly independent subset of filters is retained, preserving the layer's information richness. A novel “smaller-norm-and-linearly-dependent-less-important” pruning criterion is proposed to compress the model. The training and pruning processes can be repeated until the model is fully trained. To evaluate the approach, we applied the proposed joint training-and-pruning scheme to CNNs for image classification on three benchmark datasets: MNIST, CIFAR-10, and ILSVRC-2012. The proposed method achieved a higher pruning rate and better classification performance than state-of-the-art compression methods.
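Only the abstract is available on this page, but its two core mechanisms can be sketched concretely. Below is a minimal, hypothetical PyTorch/SciPy illustration, assuming a nuclear-norm penalty on each layer's flattened filter matrix during training, and pivoted QR factorization as one standard way to extract a maximal linearly independent subset of filters for pruning. The helper names (`nuclear_norm_penalty`, `filters_to_keep`), the tolerance `tol`, and the regularization weight are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import scipy.linalg
import torch
import torch.nn as nn

def nuclear_norm_penalty(conv: nn.Conv2d) -> torch.Tensor:
    """Nuclear norm of the layer's flattened filter matrix.

    Added to the task loss with a small weight, this pushes the
    filters toward an approximately low-rank configuration, so that
    more of them become linearly dependent and hence prunable.
    """
    w = conv.weight                            # (out_c, in_c, kH, kW)
    mat = w.reshape(w.shape[0], -1)            # one row per filter
    return torch.linalg.matrix_norm(mat, ord="nuc")

def filters_to_keep(conv: nn.Conv2d, tol: float = 1e-2) -> list[int]:
    """Indices of a maximal linearly independent subset of filters.

    Pivoted QR on the transposed filter matrix orders filters by how
    much independent information each adds (the |R| diagonal); filters
    whose pivot falls below tol * the largest pivot are treated as
    linearly dependent on the kept ones and become pruning candidates.
    """
    mat = conv.weight.detach().cpu().numpy()
    mat = mat.reshape(mat.shape[0], -1).T      # columns = filters
    _, r, piv = scipy.linalg.qr(mat, mode="economic", pivoting=True)
    diag = np.abs(np.diag(r))
    n_keep = int((diag > tol * diag[0]).sum())
    return sorted(piv[:n_keep].tolist())

# One illustrative training step: task loss plus nuclear-norm penalty.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Flatten(),
                      nn.Linear(16 * 30 * 30, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
opt.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss = loss + 1e-4 * nuclear_norm_penalty(model[0])  # weight assumed
loss.backward()
opt.step()
print(filters_to_keep(model[0]))  # filter indices to retain this round
```

Pivoted QR is a stand-in here for the paper's subset selection; the actual “smaller-norm-and-linearly-dependent-less-important” criterion also weighs filter norms, which could be folded in by combining each filter's ℓ2 norm with its dependence score before thresholding.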