Neural expressiveness for beyond-importance model compression

15 May 2024 (modified: 06 Nov 2024) · Submitted to NeurIPS 2024 · CC BY 4.0
Keywords: model compression, efficient deep learning, pruning
TL;DR: We propose Neural Expressiveness, a weight-importance-independent and data-agnostic criterion for model compression that enables highly efficient neural network structures.
Abstract: Neural network pruning has been established as a driving force in the search for memory- and energy-efficient solutions with high throughput, both during training and at test time. In this paper, we introduce a novel criterion for model compression, named "Expressiveness". Unlike existing pruning methods that rely on the inherent "Importance" of neuron and filter weights, "Expressiveness" emphasizes the ability of a neuron, or group of neurons, to redistribute informational resources effectively, based on the overlap of their activations. This characteristic is strongly correlated with a network's initialization state, making the criterion independent of the learning state ($\textit{stateless}$) and thus setting a new basis for expanding compression strategies with respect to the "When to Prune" question. We show that expressiveness can be effectively approximated with arbitrary data or a small number of representative samples from the dataset, paving the way for $\textit{data-agnostic strategies}$. Our work also facilitates a "hybrid" formulation that combines expressiveness- and importance-based pruning strategies, illustrating their complementary benefits and delivering up to 10$\times$ higher parameter compression ratios than weight-based approaches, with an average performance degradation of 1\%. We also show that employing expressiveness on its own for pruning improves compression efficiency over top-performing and foundational methods. Finally, on YOLOv8, we achieve a 46.1\% reduction in MACs by removing 55.4\% of the parameters, with a 3\% increase in mean Average Precision ($mAP_{50-95}$) for object detection on the COCO dataset.
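To make the overlap-of-activations idea concrete, the following is a minimal, hypothetical sketch, not the paper's exact formulation: it scores the output channels of a convolutional layer by how much their binarized activation patterns overlap on arbitrary probe inputs, reflecting the data-agnostic claim. All names here (`activation_overlap_scores`, `n_probe`, the Jaccard-style overlap measure) are illustrative assumptions.

```python
# Illustrative sketch only: score conv channels by activation-pattern overlap
# on arbitrary (random) probe data. High overlap = redundant = prune candidate.
import torch
import torch.nn as nn

def activation_overlap_scores(layer: nn.Conv2d, in_shape, n_probe: int = 64):
    """Return a per-output-channel overlap score in [0, 1].

    A channel whose binarized activation pattern overlaps heavily with the
    union of the other channels' patterns redistributes little new
    information under this expressiveness-style proxy.
    """
    x = torch.randn(n_probe, *in_shape)           # arbitrary probe inputs
    with torch.no_grad():
        acts = torch.relu(layer(x))               # (N, C, H, W) activations
    fired = (acts > 0).flatten(2)                 # (N, C, H*W) firing masks
    C = fired.shape[1]
    scores = torch.zeros(C)
    for c in range(C):
        others = torch.cat([fired[:, :c], fired[:, c + 1:]], dim=1).any(dim=1)
        inter = (fired[:, c] & others).sum()
        union = (fired[:, c] | others).sum().clamp(min=1)
        scores[c] = inter.float() / union.float() # Jaccard-style overlap
    return scores

# Usage: flag the most redundant channels of a layer for removal.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
overlap = activation_overlap_scores(conv, in_shape=(3, 32, 32))
prune_idx = overlap.argsort(descending=True)[:4]  # 4 most overlapping channels
```

Under this proxy, the highest-overlap channels contribute the least new information and would be the first candidates for removal; a hybrid scheme could combine such a score with a conventional weight-magnitude importance score.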
Primary Area: Optimization for deep networks
Submission Number: 16819