Robust pruning for efficient CNNs

Pattern Recognit. Lett., 2020 (modified: 08 Apr 2024)
Highlights:
• Promising pruning methods utilize the empirical classification loss to remove parameters of CNNs.
• The empirical classification loss, however, can be vulnerable to outlier samples.
• We propose a novel pruning criterion that is robust against such outlier samples.
• Our method also provides a stable metric for parameters.
• Experimental results demonstrate favorable performance of our method in comparison with other pruning methods.

Abstract: Deep convolutional neural networks (CNNs) with a considerable number of parameters are among the most promising methods for image recognition. It is, however, generally difficult to apply deep CNNs to resource-constrained devices because of their heavy computational burden. To reduce the computational cost of CNNs while retaining classification performance, it is effective to apply pruning methods that remove redundant parameters contributing little to classification. The contribution of a parameter can be estimated from the empirical classification loss computed over training samples to which ground-truth labels are assigned. The empirical classification loss, however, can be vulnerable to outlier samples and/or hard samples that are difficult to classify, and the pruning would accordingly be degraded. In this paper, we propose a pruning method based on a novel criterion that measures the redundancy of CNN parameters through the empirical classification loss. We start with the Taylor expansion of the loss function and then derive a mathematical formulation of the pruning criterion so that it is robust against such outlier samples. The proposed criterion also provides a stable metric for parameters and evaluates layers of various depths fairly, without bias toward shallower or deeper layers. In addition, we present an effective method to normalize the criterion scores for further improving performance. In experiments on image classification, our method exhibits favorable performance compared with other methods.
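The abstract only outlines the approach; as a rough illustration of the kind of criterion it builds on, the sketch below scores convolutional filters with a generic first-order Taylor-expansion term and normalizes the scores per layer. The helper name `taylor_scores` and the usage lines are hypothetical, and the paper's robust, outlier-resistant weighting is not reproduced here.

```python
# Minimal sketch (not the authors' implementation) of a first-order
# Taylor-expansion pruning criterion with layer-wise score normalization.
import torch
import torch.nn as nn

def taylor_scores(model: nn.Module, loss: torch.Tensor) -> dict:
    """Score each Conv2d filter by |weight * gradient|, summed over its kernel.

    The change in loss from removing a parameter is approximated by the
    first-order Taylor term (parameter times its gradient).
    """
    loss.backward()
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d) and module.weight.grad is not None:
            # Per-filter score: sum |w * dL/dw| over input channels and kernel dims.
            s = (module.weight * module.weight.grad).abs().sum(dim=(1, 2, 3))
            # Layer-wise L2 normalization so shallow and deep layers are comparable.
            scores[name] = s / (s.norm(p=2) + 1e-8)
    return scores

# Usage sketch (hypothetical model and data):
# out = model(images)
# loss = nn.functional.cross_entropy(out, labels)
# scores = taylor_scores(model, loss)
# -> prune the filters with the smallest scores across all layers.
```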