Filter Combination Learning for Convolutional Neural Network

Published: 2020, Last Modified: 18 May 2024 · ICTC 2020 · CC BY-SA 4.0
Abstract: In this paper, we propose a method for representing the convolution filters of a Convolutional Neural Network (CNN) model as linear combinations of a small number of basis filters that are provided as input features. In our approach, the combination coefficients are searched (trained) over the given input basis filters (IBFs) to best generate the convolution filter parameters. Since all the convolution filters are generated by linear combinations of the IBFs, the size of a CNN model can be compressed if the number of coefficients for the linear combinations is less than the number of filter parameters. In our experiments, widely used deep learning models such as VGG-16 and ResNet-18 are utilized. About 70% of the learnable parameters in those models are removed, with an observed accuracy drop of 1-2%. Note that the aim of our approach is not to reduce the number of parameters and the arithmetic operations involved at inference time. Our primary goal is to investigate the possibility of expressing filters as linear combinations of a small set of IBFs. The second goal is to compress a model through these linear combinations, which is beneficial when the model needs to be distributed and stored (particularly when downloaded to mobile devices over Wi-Fi). The code for the experiments is available at https://github.com/jjeamin/Filter_Generation_Network.
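The core idea, generating each convolution filter as a linear combination of a fixed set of basis filters, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the dimensions (64 filters of shape 3x3 built from 8 basis filters) and variable names are hypothetical.

```python
import numpy as np

# Hypothetical dimensions (not from the paper): a conv layer with 64 output
# filters of shape 3x3, generated from 8 input basis filters (IBFs).
num_filters, num_bases, k = 64, 8, 3

rng = np.random.default_rng(0)
# The IBFs are provided as fixed input features (frozen during training).
bases = rng.standard_normal((num_bases, k, k))
# The learnable parameters are only the combination coefficients:
# one row of num_bases coefficients per generated filter.
coeffs = rng.standard_normal((num_filters, num_bases))

# Each convolution filter is a linear combination of the basis filters:
# W[i] = sum_j coeffs[i, j] * bases[j]
filters = np.einsum('ij,jkl->ikl', coeffs, bases)

# Storage shrinks whenever num_bases < k*k, since only the coefficients
# need to be learned and stored (here 8 coefficients vs. 9 raw weights
# per filter).
learned = coeffs.size           # 64 * 8 = 512 learnable parameters
full = num_filters * k * k      # 64 * 9 = 576 raw filter parameters
print(filters.shape, learned, full)
```

In a training framework, `coeffs` would be the trainable tensor while `bases` stays fixed, so the gradient flows only into the coefficients.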