Abstract: In recent years, there has been a growing focus on compact and efficient network techniques in computer vision research. Towards this goal, this study presents KDVGG-Lite, an image classification model inspired by the state-of-the-art VGG16 architecture. KDVGG-Lite combines knowledge distillation from the larger VGG16 model with pruning, attaining strong results on the CIFAR-10 and Fashion-MNIST datasets. Knowledge distillation compresses the essential information of the teacher into the smaller student, while pruning adapts the model for resource-constrained environments. The results show that KDVGG-Lite surpasses its VGG16 counterpart on CIFAR-10 and Fashion-MNIST, respectively, in accuracy (increases of 11.89% and 11.28%), precision (11.51% and 10.06%), recall (11.89% and 11.28%), and F1 score (12.38% and 11.40%), despite having almost 16 times fewer parameters.
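As a point of reference for the knowledge-distillation component mentioned above, the sketch below shows a standard temperature-scaled distillation loss in the style of Hinton et al. (2015). It assumes PyTorch, and the `temperature` and `alpha` values are illustrative defaults rather than the paper's actual settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Response-based KD loss: weighted sum of the temperature-scaled
    KL divergence between teacher and student soft targets and the
    usual hard-label cross-entropy on the student's logits."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    # The KL term is scaled by T^2 so its gradient magnitude stays
    # comparable to the cross-entropy term as the temperature grows.
    kd_term = F.kl_div(soft_student, soft_targets,
                       reduction="batchmean") * (temperature ** 2)
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```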