Logarithmic Unbiased Quantization: Practical 4-bit Training in Deep Learning

Published: 28 Jan 2022, Last Modified: 13 Feb 2023
ICLR 2022 Submitted
Readers: Everyone
Keywords: quantization, efficient training, 4-bit training
Abstract: Quantization of the weights and activations is one of the main methods to reduce the computational footprint of Deep Neural Network (DNN) training. Current methods enable 4-bit quantization of the forward phase. However, this constitutes only a third of the training process. Reducing the computational footprint of the entire training process requires quantization of the neural gradients, i.e., the loss gradients with respect to the outputs of intermediate neural layers. In this work, we examine the importance of unbiased quantization in quantized neural network training, where it must be maintained, and how to do so. Based on this, we suggest a logarithmic unbiased quantization (LUQ) method to quantize both the forward and backward phases to 4 bits, achieving state-of-the-art results in 4-bit training. For example, in ResNet50 on ImageNet, we achieve a degradation of 1.18%, which we further reduce to only 0.64% after a single epoch of high-precision fine-tuning combined with a variance reduction method. Finally, we suggest a method that exploits the low-precision format by avoiding multiplications during two-thirds of the training process, thus reducing the multiplier area by 5x. A reference implementation is supplied in the supplementary material.
One-sentence Summary: A practical method to train deep neural networks with 4-bit precision, achieving state-of-the-art results
Supplementary Material: zip
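
The authors' reference implementation is provided in the supplementary zip. As a rough illustration of the idea named in the abstract, the sketch below shows unbiased stochastic rounding of a gradient tensor onto power-of-two (logarithmic) levels. The function name `luq_like_quantize`, the one-sign-bit level layout, and the stochastic underflow-pruning rule are assumptions made for illustration only, not the paper's exact scheme.

```python
import torch

def luq_like_quantize(x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Sketch: stochastically round x onto power-of-two levels so E[q(x)] = x."""
    sign = torch.sign(x)
    mag = torch.abs(x)
    eps = torch.finfo(x.dtype).tiny

    # Align the top quantization level with the largest magnitude in x.
    scale = mag.max().clamp_min(eps)
    r = mag / scale                               # normalized magnitudes in [0, 1]

    n_levels = 2 ** (bits - 1)                    # assumption: one bit reserved for sign
    e_min = -(n_levels - 1)                       # smallest exponent on the grid
    min_level = 2.0 ** e_min

    # Power-of-two level just below each value (exponent clamped to the grid).
    e = torch.floor(torch.log2(r.clamp_min(eps))).clamp(e_min, 0.0)
    lo = torch.pow(2.0, e)
    hi = torch.pow(2.0, e + 1)

    # Round up with probability proportional to the distance from the lower
    # level, so the expectation of the quantized value equals r (unbiased).
    p_up = (r - lo) / (hi - lo)
    q = torch.where(torch.rand_like(r) < p_up, hi, lo)

    # Underflow: values below the smallest level are stochastically pruned,
    # i.e., kept at the smallest level with probability r / min_level
    # (and set to zero otherwise), which again preserves unbiasedness.
    under = r < min_level
    keep = torch.rand_like(r) * min_level < r
    q = torch.where(under,
                    torch.where(keep, torch.full_like(q, min_level), torch.zeros_like(q)),
                    q)

    return sign * scale * q
```

For example, calling `luq_like_quantize(grad, bits=4)` on a gradient tensor returns values lying on a signed power-of-two grid (or zero), with an expectation equal to the input, which is the unbiasedness property the abstract argues is needed when quantizing neural gradients.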