Loss-aware Weight Quantization of Deep Networks

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization (with possibly different scaling parameters for the positive and negative weights) and arbitrary m-bit quantization. Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms, and is as accurate as (or even more accurate than) the full-precision network.
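  To illustrate the kind of output the abstract describes, below is a minimal sketch of ternarizing a weight vector into {-alpha_n, 0, +alpha_p} with separate scaling parameters for the positive and negative weights. This is not the paper's loss-aware procedure (which accounts for the effect of quantization on the loss); the threshold rule and per-sign scale estimates here are simple heuristic assumptions chosen only to show what such a quantizer produces.

  ```python
  import numpy as np

  def ternarize(w, delta_ratio=0.7):
      """Illustrative ternarization into {-alpha_n, 0, +alpha_p}.

      NOT the paper's loss-aware algorithm: the threshold and the
      per-sign scales below are heuristic assumptions for illustration.
      """
      delta = delta_ratio * np.mean(np.abs(w))        # heuristic threshold (assumption)
      pos = w > delta
      neg = w < -delta
      alpha_p = w[pos].mean() if pos.any() else 0.0   # scale for positive weights
      alpha_n = -w[neg].mean() if neg.any() else 0.0  # scale for negative weights
      q = np.zeros_like(w)
      q[pos] = alpha_p
      q[neg] = -alpha_n
      return q, alpha_p, alpha_n

  w = np.random.randn(8).astype(np.float32)
  q, alpha_p, alpha_n = ternarize(w)
  print(w, q, alpha_p, alpha_n)
  ```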
  • TL;DR: A loss-aware weight quantization algorithm that directly accounts for the effect of quantization on the loss is proposed.
  • Keywords: deep learning, network quantization
