A Contextual Discretization framework for compressing Recurrent Neural Networks

Aidan Clark, Vinay Uday Prabhu, John Whaley

Feb 17, 2017 · ICLR 2017 workshop submission
  • Abstract: In this paper, we address the issue of training Recurrent Neural Networks with binary weights, introduce a novel Contextualized Discretization (CD) framework, and showcase its effectiveness across multiple RNN architectures and two disparate tasks. We also propose a modified GRU architecture that allows harnessing the CD method and reclaiming the exclusive use of weights in $\{-1, 1\}$, which in turn reduces the number of power-of-two bit multiplications from $O(n^3)$ to $O(n^2)$ (see the illustrative sketch after this list).
  • TL;DR: A new technique for weight binarized compression of Recurrent Neural Networks
  • Keywords: Deep learning, Supervised Learning, Applications
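
The sketch below is a minimal, generic illustration of training an RNN cell with weights constrained to $\{-1, 1\}$ in the forward pass, using a sign-based binarizer and a straight-through estimator so gradients still update the underlying real-valued weights. It is not the paper's CD framework; the names (`Binarize`, `BinarizedRNNCell`) and the PyTorch setup are illustrative assumptions.

```python
# Minimal sketch (assumption, not the paper's CD method): binary-weight RNN cell.
import torch
import torch.nn as nn


class Binarize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        # Map real-valued weights to {-1, +1}.
        return torch.where(w >= 0, torch.ones_like(w), -torch.ones_like(w))

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: pass the gradient through unchanged.
        return grad_output


class BinarizedRNNCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.w_ih = nn.Parameter(torch.randn(hidden_size, input_size) * 0.1)
        self.w_hh = nn.Parameter(torch.randn(hidden_size, hidden_size) * 0.1)
        self.bias = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x, h):
        # Binarized weights are used in the forward pass; real-valued
        # parameters receive the gradient updates.
        w_ih = Binarize.apply(self.w_ih)
        w_hh = Binarize.apply(self.w_hh)
        return torch.tanh(x @ w_ih.t() + h @ w_hh.t() + self.bias)


# Usage: one forward/backward step on random data.
cell = BinarizedRNNCell(8, 16)
x, h = torch.randn(4, 8), torch.zeros(4, 16)
loss = cell(x, h).pow(2).mean()
loss.backward()  # gradients reach the real-valued weights via the STE
```

With weights restricted to $\{-1, 1\}$, the matrix-vector products above reduce to additions and subtractions, which is the kind of multiplication saving the abstract refers to.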
