Neural Network Optimization with Weight Evolution

Published: 11 Jul 2023, Last Modified: 14 Jul 2023, NCW ICML 2023
Keywords: pruning, compression, optimization, weight evolution
TL;DR: Optimization of neural networks using evolution of weights
Abstract: In contrast to magnitude pruning, which considers only the parameter values at the end of training and removes the insignificant ones, this paper introduces a new approach that estimates the importance of each parameter holistically. The proposed method tracks parameter values from the first epoch to the last and computes a weighted average across training, giving more weight to values closer to the completion of training. We have tested this method on popular deep neural networks (AlexNet, VGGNet, ResNet, and DenseNet) on the benchmark datasets CIFAR-10 and Tiny ImageNet. The results show that our approach achieves higher compression with a smaller loss of accuracy than magnitude pruning.
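The abstract does not specify the exact weighting function, so the sketch below is only an assumed instantiation: an exponential moving average of parameter magnitudes, updated once per epoch, which naturally weights late-training values more heavily. The class name WeightEvolutionTracker, the decay parameter beta, and the single global magnitude threshold are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class WeightEvolutionTracker:
    """Hypothetical sketch: accumulate an exponential moving average of
    |parameter| across epochs, so values near the end of training count
    more, then prune the parameters with the lowest accumulated score."""

    def __init__(self, model: nn.Module, beta: float = 0.9):
        self.beta = beta  # higher beta = more weight on recent epochs
        self.scores = {name: torch.zeros_like(p)
                       for name, p in model.named_parameters()}

    @torch.no_grad()
    def update(self, model: nn.Module):
        # Call once per epoch: EMA of parameter magnitudes.
        for name, p in model.named_parameters():
            self.scores[name].mul_(self.beta).add_(p.abs(), alpha=1 - self.beta)

    @torch.no_grad()
    def prune(self, model: nn.Module, sparsity: float):
        # Zero out the fraction `sparsity` of parameters with the
        # lowest accumulated importance (one global threshold).
        all_scores = torch.cat([s.flatten() for s in self.scores.values()])
        k = max(1, int(sparsity * all_scores.numel()))
        threshold = all_scores.kthvalue(k).values
        for name, p in model.named_parameters():
            p.mul_((self.scores[name] > threshold).float())

# Usage sketch: update after every training epoch, prune at the end.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
tracker = WeightEvolutionTracker(model, beta=0.9)
for epoch in range(10):
    # ... one epoch of training would go here ...
    tracker.update(model)
tracker.prune(model, sparsity=0.5)

Because the score is accumulated over the whole trajectory rather than read off the final weights, a parameter that ends small but was consistently large during training can survive pruning, which is the distinction the abstract draws against magnitude pruning.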
Submission Number: 13