ProxQuant: Quantized Neural Networks via Proximal Operators

27 Sept 2018, 22:38 (modified: 10 Feb 2022, 11:32) — ICLR 2019 Conference Blind Submission
Keywords: Model quantization, Optimization, Regularization
TL;DR: A principled framework for model quantization using the proximal gradient method, with empirical evaluation and theoretical convergence analyses.
Abstract: To make deep neural networks feasible in resource-constrained environments (such as mobile devices), it is beneficial to quantize models by using low-precision weights. One common technique for quantizing neural networks is the straight-through gradient method, which enables back-propagation through the quantization mapping. Despite its empirical success, little is understood about why the straight-through gradient method works. Building upon a novel observation that the straight-through gradient method is in fact identical to the well-known Nesterov's dual-averaging algorithm on a quantization-constrained optimization problem, we propose a more principled alternative approach, called ProxQuant, which instead formulates quantized network training as a regularized learning problem and optimizes it via the prox-gradient method. ProxQuant does back-propagation on the underlying full-precision vector and applies an efficient prox-operator in between stochastic gradient steps to encourage quantizedness. For quantizing ResNets and LSTMs, ProxQuant outperforms state-of-the-art results on binary quantization and is on par with the state of the art on multi-bit quantization. We further perform theoretical analyses showing that ProxQuant converges to stationary points under mild smoothness assumptions, whereas variants such as the lazy prox-gradient method can fail to converge in the same setting.
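As a rough illustration of the abstract's core idea (full-precision gradient step followed by a prox-operator that pulls weights toward a quantized set), here is a minimal NumPy sketch for the binary case. It assumes the regularizer is the distance to {−1, +1}, whose per-coordinate prox is soft-thresholding toward the nearest binary value; the function names and the choice of regularization strength are illustrative, not taken from the paper.

```python
import numpy as np

def soft_threshold(x, lam):
    """Elementwise soft-thresholding: prox of lam * |x|."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_binary(theta, lam):
    """Prox of lam * dist(theta, {-1, +1}), applied per coordinate.

    Each weight is pulled toward its nearest binary target by at most
    lam, snapping to the target when it is within lam of it.
    (Illustrative sketch; assumes lam is small enough that a weight's
    nearest target does not change.)
    """
    target = np.where(theta >= 0, 1.0, -1.0)  # nearest point in {-1, +1}
    return target + soft_threshold(theta - target, lam)

def proxquant_step(theta, grad, lr=0.1, lam=0.01):
    """One prox-gradient step: SGD update, then prox toward quantizedness."""
    return prox_binary(theta - lr * grad, lam)
```

For example, `prox_binary(np.array([0.5]), 0.2)` moves the weight to `0.7`, while `prox_binary(np.array([0.95]), 0.2)` snaps it to `1.0`. Ramping `lam` up over training (as regularized-quantization schemes typically do) drives the weights fully onto {−1, +1}.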
Code: allenbai01/ProxQuant (GitHub)
Data: Penn Treebank