Additive Powers-of-Two Quantization: An Efficient Non-uniform Discretization for Neural Networks

Sep 25, 2019 Blind Submission
  • Abstract: We propose Additive Powers-of-Two (APoT) quantization, an efficient non-uniform quantization scheme that attends to the bell-shaped and long-tailed distribution of weights and activations in neural networks. By constraining all quantization levels to be a sum of several powers-of-two terms, APoT quantization enjoys high computational efficiency and a good match with the weights' distribution (see the level-construction sketch below). A simple reparameterization of the clipping function is applied to generate a better-defined gradient for updating the optimal clipping threshold. Moreover, weight normalization is presented to refine the input distribution of weights so that it is more stable and consistent. Experimental results show that our proposed method outperforms state-of-the-art methods and is even competitive with full-precision models, demonstrating the effectiveness of the proposed APoT quantization. For example, our 4-bit quantized ResNet-50 on ImageNet achieves 76.8% top-1 accuracy without bells and whistles; meanwhile, our model reduces the fixed-point computation overhead by 22% compared to its uniformly quantized counterpart.
  • Keywords: Quantization, Efficient Inference, Neural Networks
  • Code: https://github.com/yhhhli/APoT_Quantization
  • Original Pdf:  pdf
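
The core idea stated in the abstract is that each quantization level is a sum of a small number of power-of-two terms, so a multiplication by a level reduces to shifts and adds while the levels remain dense near zero, where most weights lie. The NumPy sketch below is only an illustration of that idea under assumptions of our own: the helper names (apot_levels, apot_quantize) and the parameters num_terms and powers_per_term are hypothetical and do not reproduce the exact formulation of the paper or the linked repository.

    # Illustrative sketch of additive powers-of-two levels; not the authors' code.
    import itertools
    import numpy as np

    def apot_levels(num_terms=2, powers_per_term=4, alpha=1.0):
        # Candidate values for one additive term: zero or a negative power of two.
        base = [0.0] + [2.0 ** -i for i in range(1, powers_per_term + 1)]
        # Each level is a sum of `num_terms` such values (duplicates collapsed).
        sums = {round(sum(combo), 12)
                for combo in itertools.product(base, repeat=num_terms)}
        levels = np.array(sorted(sums))
        # Rescale so the largest level equals the clipping threshold alpha.
        return alpha * levels / levels.max()

    def apot_quantize(w, levels):
        # Clip magnitudes at the threshold, snap to the nearest level, restore sign.
        alpha = levels.max()
        mag = np.clip(np.abs(w), 0.0, alpha)
        idx = np.argmin(np.abs(mag[..., None] - levels), axis=-1)
        return np.sign(w) * levels[idx]

    # Toy usage: quantize a small random weight matrix.
    w = 0.1 * np.random.randn(4, 4)
    levels = apot_levels(num_terms=2, powers_per_term=4, alpha=3.0 * float(w.std()))
    print(apot_quantize(w, levels))

Because every level decomposes into a few power-of-two terms, the resulting grid is non-uniform (finer near zero, coarser in the tails), which is the property the abstract contrasts with uniform quantization.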