Differentiable Model Compression via Pseudo Quantization Noise

Published: 07 Oct 2022, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: We propose DiffQ, a differentiable method for model compression that quantizes model parameters without gradient approximations (e.g., the Straight-Through Estimator). We add independent pseudo quantization noise to the model parameters during training to approximate the effect of a quantization operator. DiffQ is differentiable with respect to both the unquantized weights and the number of bits used. Given a single hyper-parameter balancing quantized model size against accuracy, DiffQ optimizes the number of bits used per individual weight or group of weights during end-to-end training. We experimentally verify that our method is competitive with STE-based quantization techniques on several benchmarks and architectures for image classification, language modeling, and audio source separation. For instance, on the ImageNet dataset, DiffQ compresses a 12-layer transformer-based model by more than a factor of 8 (less than 4 bits of precision per weight on average), with a loss of 0.3% in model accuracy. Code is available at github.com/facebookresearch/diffq
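To illustrate the idea described in the abstract, below is a minimal PyTorch sketch of train-time pseudo quantization noise with a learnable bit depth and a differentiable model-size penalty. The class name `NoisyQuantizer`, the sigmoid parametrization of the bit depth, and the penalty weighting are illustrative assumptions for this sketch, not the exact implementation in the diffq repository.

```python
import torch
from torch import nn


class NoisyQuantizer(nn.Module):
    """Sketch: approximate quantization with additive noise and a learnable bit depth."""

    def __init__(self, min_bits: float = 2.0, max_bits: float = 15.0):
        super().__init__()
        self.min_bits = min_bits
        self.max_bits = max_bits
        # Unconstrained parameter mapped through a sigmoid to a bit depth in [min_bits, max_bits].
        self.bits_logit = nn.Parameter(torch.zeros(()))

    def bits(self) -> torch.Tensor:
        return self.min_bits + (self.max_bits - self.min_bits) * torch.sigmoid(self.bits_logit)

    def forward(self, weight: torch.Tensor) -> torch.Tensor:
        b = self.bits()
        # Step size of a uniform quantizer over the weight range at b bits.
        scale = (weight.max() - weight.min()) / (2 ** b - 1)
        # Additive noise on the order of the quantization step stands in for rounding,
        # so the whole expression stays differentiable in both `weight` and `b`.
        # (The paper notes Gaussian noise is used in practice.)
        noise = torch.randn_like(weight)
        return weight + scale * noise / 2


# Usage sketch: jointly minimize the task loss and a differentiable size estimate.
quantizer = NoisyQuantizer()
weight = nn.Parameter(torch.randn(256, 256))
noisy_weight = quantizer(weight)

task_loss = noisy_weight.pow(2).mean()               # placeholder for the real task loss
model_size_bits = quantizer.bits() * weight.numel()  # differentiable size estimate
lam = 1e-4                                           # the single size/accuracy trade-off hyper-parameter
loss = task_loss + lam * model_size_bits / 8e6       # penalty expressed in megabytes
loss.backward()
```

At evaluation time the noise would be replaced by actual quantization of the weights at the learned bit depths, so the penalty term tracks the size of the final compressed model.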
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We rephrased some of the claims according to the reviewers' requests, changed some of the legends (Figure 3), and fixed typos. We added a section discussing the impact of our method on runtime. Edit: we had uploaded the new version of the paper in the wrong field; this is now corrected. We added a sentence in two places (where the noise is introduced and at the top of the Results section) stating that we use Gaussian noise in practice. We also added a sentence in the first paragraph of the introduction to highlight that we aim to provide small models rather than to speed up computation.
Code: https://github.com/facebookresearch/diffq/
Assigned Action Editor: ~Brian_Kingsbury1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 205