Compressing Neural Networks: Towards Determining the Optimal Layer-wise Decomposition

Published: 09 Nov 2021, Last Modified: 25 Nov 2024, NeurIPS 2021 Poster
Keywords: neural networks, compression, low-rank decomposition, matrix factorization, pruning
TL;DR: An optimization-based approach to determine the low-rank decomposition of each layer for efficient neural networks.
Abstract: We present a novel global compression framework for deep neural networks that automatically analyzes each layer to identify the optimal per-layer compression ratio, while simultaneously achieving the desired overall compression. Our algorithm hinges on the idea of compressing each convolutional (or fully-connected) layer by slicing its channels into multiple groups and decomposing each group via low-rank decomposition. At the core of our algorithm is the derivation of layer-wise error bounds from the Eckart–Young–Mirsky theorem. We then leverage these bounds to frame the compression problem as an optimization problem in which we seek to minimize the maximum compression error across layers, and we propose an efficient algorithm to solve it. Our experiments indicate that our method outperforms existing low-rank compression approaches across a wide range of networks and data sets. We believe that our results open up new avenues for future research into the global performance-size trade-offs of modern neural networks.
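To make the two ingredients of the abstract concrete, here is a minimal sketch, not the authors' torchprune implementation: the function names, the grouping scheme details, and the parameter-count model (rank × (rows + cols)) are assumptions for illustration. The per-layer step relies on the Eckart–Young–Mirsky theorem, which states that truncating the SVD of a weight matrix $W$ to rank $k$ attains the smallest possible Frobenius error over all rank-$k$ matrices: $\|W - W_k\|_F = \sqrt{\sum_{i>k}\sigma_i^2}$.

```python
# Minimal sketch (not the authors' torchprune implementation): compress one
# layer's weight matrix by slicing its input channels into groups and keeping
# a rank-k truncated SVD of each group. A conv layer can be handled the same
# way after reshaping its kernel to 2-D.
import torch

def compress_layer(W: torch.Tensor, num_groups: int, rank: int):
    """W: (out_features, in_features). Returns per-group factors (U, V) with
    W_group ~= U @ V, each of rank `rank`."""
    factors = []
    for Wg in torch.chunk(W, num_groups, dim=1):  # slice input channels
        U, S, Vh = torch.linalg.svd(Wg, full_matrices=False)
        U_k = U[:, :rank] * S[:rank]  # absorb singular values into U
        V_k = Vh[:rank, :]
        # Eckart-Young-Mirsky: ||Wg - U_k @ V_k||_F^2 = (S[rank:] ** 2).sum(),
        # the minimum over all rank-`rank` approximations of Wg.
        factors.append((U_k, V_k))
    return factors
```

The global step then has to choose a rank per layer. A simple stand-in for the paper's bounds-driven optimization is a binary search over a shared relative-error tolerance: each layer takes the smallest rank meeting the tolerance, which directly targets the min-max objective subject to a size budget.

```python
# Minimal sketch (a plain binary search, not the paper's exact algorithm):
# pick per-layer ranks that meet a global parameter budget while minimizing
# the maximum relative truncation error across layers.
import torch

def allocate_ranks(weights, budget, iters=50):
    """weights: list of (out, in) matrices; budget: max total parameters after
    factorization. Assumes the budget covers rank-1 factors for every layer."""
    svals = [torch.linalg.svdvals(W) for W in weights]

    def ranks_for(eps):
        ranks = []
        for W, s in zip(weights, svals):
            # Error of keeping rank k: sqrt of the sum of squared trailing
            # singular values (tail[k]), normalized by ||W||_F.
            tail = torch.sqrt(torch.cumsum(s.flip(0) ** 2, 0)).flip(0)
            rel = tail / torch.linalg.norm(W)
            k = int((rel > eps).sum())  # smallest rank with rel. error <= eps
            ranks.append(max(k, 1))
        return ranks

    def size(ranks):
        # A rank-k factorization of an (m, n) matrix costs k * (m + n) params.
        return sum(k * (W.shape[0] + W.shape[1]) for W, k in zip(weights, ranks))

    lo, hi = 0.0, 1.0
    for _ in range(iters):  # binary search on the shared error tolerance
        mid = (lo + hi) / 2
        if size(ranks_for(mid)) <= budget:
            hi = mid  # budget met; try a tighter (smaller) error
        else:
            lo = mid
    return ranks_for(hi)
```

The overall control flow, per-layer error bounds feeding a global allocation, mirrors the framework described in the abstract; the paper's actual algorithm and its treatment of the grouped decomposition are detailed in the linked code and supplementary material.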
Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
Supplementary Material: pdf
Code: https://github.com/lucaslie/torchprune
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/compressing-neural-networks-towards/code)