HRBP: Hardware-friendly Regrouping towards Block-based Pruning for Sparse CNN Training

Published: 20 Nov 2023, Last Modified: 05 Dec 2023, CPAL 2024 (Proceedings Track) Oral
Keywords: efficient training, sparse training, fine-grained structured sparsity, regrouping algorithm
TL;DR: This paper proposes a novel fine-grained structured pruning algorithm that accelerates sparse training of convolutional neural networks in both the forward and backward passes.
Abstract: Pruning at initialization and training a sparse network from scratch (sparse training) have become increasingly popular. However, most sparse training literature addresses only unstructured sparsity, which in practice brings little training acceleration on GPUs due to the irregular distribution of non-zero weights. In this paper, we study sparse training with fine-grained structured sparsity, extracting a few dense blocks from the unstructured sparse weights. For convolutional neural networks (CNNs), however, the extracted dense blocks are broken up during backpropagation because the convolution filters are reshaped for GEMM. Thus, previous block-wise pruning methods can only accelerate the forward pass of sparse CNN training. To this end, we propose Hardware-friendly Regrouping towards Block-based Pruning (HRBP), where the grouping is conducted on the kernel-wise mask. With HRBP, the extracted dense blocks are preserved during backpropagation. Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet demonstrate that HRBP nearly matches the accuracy of unstructured sparse training methods while achieving substantial acceleration on hardware. Code is available at: https://github.com/HowieMa/HRBP-pruning.
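To illustrate the kernel-wise masking the abstract refers to, below is a minimal PyTorch sketch (not the authors' implementation; the function name and shapes are assumptions for illustration). A mask defined per (output-channel, input-channel) pair zeroes entire k x k kernels, which is the granularity at which HRBP groups weights so that dense blocks survive the GEMM reshaping used in both the forward and backward passes.

```python
# Minimal sketch of kernel-wise masking for a Conv2d weight of shape
# [C_out, C_in, k, k]. The mask has shape [C_out, C_in]: each entry keeps or
# drops an entire k x k kernel. Per the paper's motivation, grouping at this
# granularity lets extracted dense blocks stay contiguous after the
# convolution weight is reshaped for GEMM in forward and backward passes.
import torch


def apply_kernel_wise_mask(weight: torch.Tensor, kernel_mask: torch.Tensor) -> torch.Tensor:
    """weight: [C_out, C_in, k, k]; kernel_mask: [C_out, C_in] of 0/1 values."""
    return weight * kernel_mask[:, :, None, None]


# Example: prune roughly half of the kernels of a 4x4x3x3 convolution.
torch.manual_seed(0)
w = torch.randn(4, 4, 3, 3)
mask = (torch.rand(4, 4) > 0.5).float()   # hypothetical kernel-wise mask
w_sparse = apply_kernel_wise_mask(w, mask)
print(w_sparse.abs().sum(dim=(2, 3)) > 0)  # surviving kernels follow the mask
```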
Track Confirmation: Yes, I am submitting to the proceeding track.
Submission Number: 11