Deep Learning Models Compression Based on Evolutionary Algorithms and Digital Fractional Differentiation

Published: 01 Jan 2023, Last Modified: 13 Nov 2024 · CEC 2023 · CC BY-SA 4.0
Abstract: Neural Networks (NNs) have shown excellent results on a variety of Machine Learning (ML) tasks and are now used in a wide range of applications. Meanwhile, the size of these models has grown, with some recent state-of-the-art models comprising billions of parameters. The need for compact and efficient NN representations has been widely recognized, and specialized compression techniques have been developed for various applications. In this paper, we propose optimizing compression using evolutionary algorithms and fractional differentiation. To this end, three main criteria are taken into account: the filter approximation error obtained with fractional differentiation, the accuracy of the NN, and the compression ratio. Results on the LeNet5 model show no loss of accuracy at a 50% compression ratio on MNIST, 18% on CIFAR10, and 9% on CIFAR100. Moreover, the analysis shows that the kernels of the first layer are harder to compress than those of the second layer.
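The abstract combines two ingredients: approximating convolution filters with digital fractional differentiation masks, and searching for the fractional order with an evolutionary algorithm. The sketch below is a hypothetical illustration of that combination, not the paper's actual method: it builds Grünwald–Letnikov (GL) fractional-difference coefficients, scores how well a scaled GL mask reproduces a flattened kernel (the function names `gl_coeffs`, `approx_error`, and `evolve_alpha`, and all hyperparameters, are assumptions of this sketch), and runs a toy evolutionary search over the order alpha.

```python
import numpy as np


def gl_coeffs(alpha, n):
    """First n Grünwald-Letnikov coefficients (-1)^k * C(alpha, k),
    computed with the recurrence c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c


def approx_error(kernel, alpha):
    """L2 error between a flattened kernel and the best least-squares
    scaling of a GL fractional-difference mask (hypothetical criterion)."""
    flat = np.asarray(kernel).ravel()
    mask = gl_coeffs(alpha, flat.size)
    scale = np.dot(mask, flat) / np.dot(mask, mask)  # c_0 = 1, so mask is never all-zero
    return float(np.linalg.norm(flat - scale * mask))


def evolve_alpha(kernel, pop=20, gens=30, seed=0):
    """Toy (mu + lambda)-style evolutionary search for the fractional order
    alpha that minimizes the approximation error of one kernel."""
    rng = np.random.default_rng(seed)
    population = rng.uniform(0.0, 2.0, pop)
    for _ in range(gens):
        fitness = np.array([approx_error(kernel, a) for a in population])
        parents = population[np.argsort(fitness)[: pop // 2]]  # keep the best half
        children = parents + rng.normal(0.0, 0.05, parents.size)  # Gaussian mutation
        population = np.concatenate([parents, children])
    fitness = np.array([approx_error(kernel, a) for a in population])
    return float(population[np.argmin(fitness)])
```

A kernel that is itself a GL mask of some order is reproduced with zero error at that order, so the evolutionary search should drive alpha toward it; in the paper this per-filter error is one criterion alongside network accuracy and compression ratio.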