PerforatedCNNs: Acceleration through Elimination of Redundant Convolutions

16 Apr 2024 (modified: 15 Feb 2016) · ICLR 2016 workshop submission · Readers: Everyone
CMT Id: 36
Abstract: We propose a novel approach to reducing the computational cost of evaluating convolutional neural networks, a factor that has hindered their deployment on low-power devices such as mobile phones. Inspired by the loop perforation technique from source code optimization, we speed up the bottleneck convolutional layers by skipping their evaluation in some of the spatial positions. We propose and analyze several strategies for choosing these positions. Our method reduces the evaluation time of modern convolutional neural networks by 50% with a small decrease in accuracy.
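The core idea can be illustrated in a few lines: evaluate the convolution only at a subset of spatial positions and fill in the skipped outputs from their nearest computed neighbours. The sketch below is a minimal single-channel illustration, not the paper's implementation; the regular-grid perforation mask and nearest-neighbour interpolation used here are one simple choice, whereas the paper proposes and compares several position-selection strategies.

```python
import numpy as np

def conv2d_full(x, w):
    """Naive valid 2-D convolution (cross-correlation), for reference."""
    H, W = x.shape
    kh, kw = w.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

def conv2d_perforated(x, w, stride=2):
    """Perforated convolution: evaluate the filter only on a strided grid
    of output positions, then copy each computed value into the skipped
    positions of its block (nearest-neighbour interpolation)."""
    H, W = x.shape
    kh, kw = w.shape
    oh, ow = H - kh + 1, W - kw + 1
    out = np.empty((oh, ow))
    # Evaluate the convolution only at a grid subset of positions
    # (roughly 1/stride**2 of the full cost).
    for i in range(0, oh, stride):
        for j in range(0, ow, stride):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    # Fill the skipped positions from the nearest computed neighbour.
    for i in range(oh):
        for j in range(ow):
            out[i, j] = out[(i // stride) * stride, (j // stride) * stride]
    return out
```

With `stride=2` the filter is applied at only about a quarter of the output positions, while the output tensor keeps its full spatial size, so subsequent layers are unaffected.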
Conflicts: skolkovotech.ru, hse.ru, cs.msu.ru, microsoft.com
