Accelerating Convolutional Neural Networks with Dominant Convolutional Kernel and Knowledge Pre-regression

ECCV 2016 (modified: 11 Nov 2022)
Abstract: Aiming at accelerating the test time of deep convolutional neural networks (CNNs), we propose a model compression method that combines a novel dominant kernel (DK) with a new training method called knowledge pre-regression (KP). In the combined model, DK$^2$PNet, DK performs a low-rank decomposition of the convolutional kernels, while KP transfers knowledge of intermediate hidden layers from a larger teacher network to its compressed student network using a cross-entropy loss function instead of the Euclidean distance used in previous work. Experimental results on the CIFAR-10, CIFAR-100, MNIST, and SVHN benchmarks show that DK$^2$PNet achieves accuracy close to the state of the art while requiring dramatically fewer model parameters.
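The abstract contrasts KP's cross-entropy hint loss with the Euclidean-distance hint losses of earlier teacher-student methods. The paper's exact formulation is not given here, so the sketch below is only illustrative: it treats a teacher and a student intermediate-activation vector as probability distributions (via softmax, an assumed normalization step) and compares the two loss styles. All function and variable names are hypothetical.

```python
import math

def softmax(xs):
    # Normalize raw activations into a probability distribution
    # (numerically stabilized by subtracting the max).
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def euclidean_hint_loss(teacher, student):
    # Hint loss style used by earlier transfer methods:
    # squared L2 distance between raw intermediate activations.
    return sum((t - s) ** 2 for t, s in zip(teacher, student))

def cross_entropy_hint_loss(teacher, student):
    # KP-style loss as described in the abstract (illustrative only):
    # penalize the cross entropy between the teacher's and student's
    # normalized intermediate activations.
    p = softmax(teacher)
    q = softmax(student)
    eps = 1e-12  # guard against log(0)
    return -sum(pi * math.log(qi + eps) for pi, qi in zip(p, q))

teacher_act = [2.0, 0.5, -1.0, 0.3]  # hypothetical hidden-layer outputs
student_act = [1.5, 0.7, -0.8, 0.1]

print("Euclidean hint loss:", euclidean_hint_loss(teacher_act, student_act))
print("Cross-entropy hint loss:", cross_entropy_hint_loss(teacher_act, student_act))
```

By Gibbs' inequality, the cross-entropy term is minimized exactly when the student's normalized activations match the teacher's, so it drives the same matching behavior as the Euclidean loss while operating on distributions rather than raw magnitudes.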