PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions

Michael Figurnov, Dmitry Vetrov, Pushmeet Kohli

Feb 15, 2016 · ICLR 2016 workshop submission
  • CMT id: 36
  • Abstract: We propose a novel approach to reducing the computational cost of evaluating convolutional neural networks, a factor that has hindered their deployment in low-power devices such as mobile phones. Inspired by the loop perforation technique from source code optimization, we speed up the bottleneck convolutional layers by skipping their evaluation in some of the spatial positions. We propose and analyze several strategies for choosing these positions. Our method reduces the evaluation time of modern convolutional neural networks by 50% with a small decrease in accuracy.
  • Conflicts: skolkovotech.ru, hse.ru, cs.msu.ru, microsoft.com
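The core idea described in the abstract, evaluating a convolution only at a subset of spatial positions and filling the skipped outputs from nearby computed values, can be sketched as follows. This is a hedged illustration, not the authors' implementation: the function name `perforated_conv2d`, the boolean `mask` argument, and the nearest-neighbor fill-in are all assumptions chosen for clarity; the paper itself analyzes several position-selection strategies.

```python
import numpy as np

def perforated_conv2d(image, kernel, mask):
    """Naive 2D valid convolution evaluated only where mask is True.

    Skipped output positions are filled from the nearest evaluated
    position (nearest-neighbor interpolation), a simple stand-in for
    the strategies the paper explores. Hypothetical sketch, not the
    authors' code.
    """
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    evaluated = []
    # Evaluate the convolution only at the unmasked (kept) positions.
    for i in range(oh):
        for j in range(ow):
            if mask[i, j]:
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
                evaluated.append((i, j))
    ev = np.array(evaluated)
    # Fill the perforated (skipped) positions from the nearest
    # evaluated neighbor.
    for i in range(oh):
        for j in range(ow):
            if not mask[i, j]:
                dists = (ev[:, 0] - i) ** 2 + (ev[:, 1] - j) ** 2
                ni, nj = ev[dists.argmin()]
                out[i, j] = out[ni, nj]
    return out

# Example: a regular grid mask that keeps every other position in
# each dimension skips roughly 75% of the multiply-adds.
image = np.arange(36.0).reshape(6, 6)
kernel = np.ones((3, 3))
mask = np.zeros((4, 4), dtype=bool)
mask[::2, ::2] = True
out = perforated_conv2d(image, kernel, mask)
```

A denser mask trades back speed for accuracy: positions that are actually evaluated are exact, while skipped ones carry the interpolation error, which is why the paper studies how to choose which positions to skip.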