Discrepancy-Based Knowledge Distillation for Image Classification Restoration

Published: 01 Jan 2024, Last Modified: 15 May 2025, ICMLA 2024, License: CC BY-SA 4.0
Abstract: This paper introduces knowledge distillation as a means of restoring compressed neural networks on image classification tasks. Rather than focusing on accuracy, it adopts discrepancy as the main metric for evaluating compressed neural network performance. We modify the hard target in knowledge distillation to address the discrepancy issue during restoration. Using the MNIST and CIFAR10 datasets, we generate compressed neural networks and restore them with our knowledge distillation method; the restored networks outperform those restored with cross-entropy, achieving up to a 5% reduction in performance loss. Furthermore, we discuss the impact of hyperparameter choices on discrepancy restoration. Our new knowledge distillation approach thus yields a discrepancy-based restoration method that improves the discrepancy performance of compressed neural networks.
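The abstract does not spell out the modified loss, so the sketch below illustrates one plausible reading of a discrepancy-oriented distillation objective: the standard soft-target KL term is kept, while the hard target is replaced by the original (uncompressed) network's prediction, so the restored network is pushed to agree with the original rather than with the ground-truth label. The function name `discrepancy_kd_loss` and the hyperparameters `temperature` and `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def discrepancy_kd_loss(student_logits, teacher_logits, temperature=4.0, alpha=0.5):
    """Hypothetical discrepancy-oriented KD loss (a sketch, not the paper's code).

    Soft term: KL divergence between temperature-scaled teacher and student
    distributions (standard Hinton-style distillation).
    Hard term: cross-entropy against the teacher's argmax prediction instead of
    the ground-truth label, so the restored (student) network is trained to
    minimize discrepancy with the original network rather than maximize accuracy.
    """
    # Soft-target term: match the teacher's temperature-softened distribution.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_loss = F.kl_div(log_soft_student, soft_teacher,
                         reduction="batchmean") * temperature ** 2

    # Modified hard-target term: the teacher's predicted labels replace ground truth.
    teacher_labels = teacher_logits.argmax(dim=1)
    hard_loss = F.cross_entropy(student_logits, teacher_labels)

    # alpha balances the two terms; the abstract notes that such hyperparameter
    # choices affect how well discrepancy is restored.
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```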