Conditional computation in neural networks for faster models

Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup

Feb 15, 2016 · ICLR 2016 workshop submission
  • CMT id: 181
  • Abstract: Deep learning has become the state-of-the-art tool in many applications, but the evaluation of expressive deep models can be infeasible on resource-constrained devices. The conditional computation approach has been proposed to tackle this problem (Bengio et al., 2013; Davis & Arel, 2013). It operates by selectively activating only parts of the network at a time. We propose to use reinforcement learning as a tool to optimize conditional computation policies. More specifically, we cast the problem of learning activation-dependent policies for dropping out blocks of units as reinforcement learning. We propose a learning scheme motivated by computation speed, capturing the trade-off between parsimonious activations and prediction accuracy. We apply a policy gradient algorithm to learn policies that optimize this loss function and propose a regularization mechanism that encourages diversification of the dropout policy. We present encouraging empirical results showing that this approach improves the speed of computation without impacting the quality of the approximation.
  • Conflicts: mcgill.ca

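A minimal sketch of the scheme described in the abstract (not the authors' implementation): a small policy head emits per-block Bernoulli probabilities from the input, the sampled masks gate blocks of hidden units, and a REINFORCE-style gradient trains the policy on a reward that trades task loss against the number of active blocks. The class name, the sparsity_weight coefficient, and all shapes below are illustrative assumptions, written here in PyTorch.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BlockDropoutMLP(nn.Module):
    """One hidden layer whose units are grouped into blocks; an
    activation-dependent policy decides which blocks to compute."""
    def __init__(self, in_dim, hidden_dim, n_blocks, n_classes):
        super().__init__()
        assert hidden_dim % n_blocks == 0
        self.block_size = hidden_dim // n_blocks
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.policy = nn.Linear(in_dim, n_blocks)  # input-conditioned gating
        self.fc2 = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        probs = torch.sigmoid(self.policy(x))  # per-block keep probabilities
        mask = torch.bernoulli(probs)           # sample which blocks to run
        h = F.relu(self.fc1(x))
        h = h * mask.repeat_interleave(self.block_size, dim=1)
        # Log-probability of the sampled mask, used for the policy gradient.
        log_prob = (mask * torch.log(probs + 1e-8)
                    + (1 - mask) * torch.log(1 - probs + 1e-8)).sum(dim=1)
        return self.fc2(h), mask, log_prob

def training_step(model, optimizer, x, y, sparsity_weight=0.1):
    logits, mask, log_prob = model(x)
    task_loss = F.cross_entropy(logits, y, reduction="none")
    # Reward: low prediction loss and few active blocks
    # (parsimonious activations).
    reward = -(task_loss.detach() + sparsity_weight * mask.sum(dim=1))
    # Supervised loss for the network plus a REINFORCE term for the policy.
    loss = task_loss.mean() - (reward * log_prob).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data (hypothetical shapes):
model = BlockDropoutMLP(in_dim=784, hidden_dim=512, n_blocks=16, n_classes=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
print(training_step(model, opt, x, y))

Subtracting a baseline (e.g., the batch-mean reward) would reduce the variance of this gradient estimate, and the diversity-encouraging regularizer mentioned in the abstract is omitted here for brevity.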