Universal Deep Neural Network Compression

Published: 07 Nov 2018, Last Modified: 05 May 2023 (NIPS 2018 Workshop CDNNRIA)
Abstract: In this paper, we investigate lossy compression of deep neural networks (DNNs) by weight quantization and lossless source coding for memory-efficient deployment. Whereas previous work addressed non-universal scalar quantization and entropy coding of DNN weights, we introduce, for the first time, universal DNN compression based on universal vector quantization and universal source coding. In particular, we examine universal randomized lattice quantization of DNNs, which randomizes DNN weights by uniform random dithering before lattice quantization and can perform near-optimally on any source without relying on knowledge of its probability distribution. Moreover, we present a method of fine-tuning vector-quantized DNNs to recover the performance loss after quantization. Our experimental results show that the proposed universal DNN compression scheme compresses the 32-layer ResNet (trained on CIFAR-10) and the AlexNet (trained on ImageNet) with compression ratios of $47.1$ and $42.5$, respectively.
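In its simplest (cubic-lattice) form, the randomized lattice quantization described in the abstract reduces to dithered uniform quantization of the weights. The sketch below is a minimal illustration under that assumption; the step size `step`, the function name, and the use of NumPy are our own choices for exposition, not details taken from the paper.

```python
import numpy as np

def randomized_lattice_quantize(weights, step, rng=None):
    """Minimal sketch of randomized (dithered) lattice quantization.

    Uses a scaled integer (cubic) lattice for simplicity; the paper's
    scheme covers general lattices. `step` is the lattice cell size,
    an assumed hyperparameter.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Uniform dither over one lattice cell, shared between encoder and
    # decoder (e.g., via a common random seed).
    dither = rng.uniform(-step / 2, step / 2, size=weights.shape)
    # Quantize the dithered weights to the nearest lattice point.
    indices = np.round((weights + dither) / step)
    # The decoder maps indices back to lattice points and subtracts the
    # same dither; the resulting quantization error is uniform and
    # independent of the source distribution.
    reconstructed = indices * step - dither
    return indices.astype(np.int64), reconstructed
```

Because the shared dither makes the quantization error independent of the weight distribution, a universal lossless coder applied to `indices` (e.g., an off-the-shelf compressor such as bzip2 as a stand-in) can perform near-optimally for any source, which is the sense in which the scheme is "universal."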
TL;DR: We introduce a universal deep neural network compression scheme that applies to any model and performs near-optimally regardless of its weight distribution.
Keywords: Deep neural networks, lossy compression, universal compression, entropy-coded vector quantization, universal quantization