Abstract: The prohibitive energy cost of running high-performance Convolutional Neural Networks (CNNs) has been limiting their deployment on resource-constrained platforms such as mobile and wearable devices. We propose an energy-aware dynamic-routing CNN, called EnergyNet, that adapts its inference complexity to each input, reducing overall run-time energy cost without noticeably losing (and sometimes even improving) accuracy. This is achieved with an energy loss that captures both computational and data-movement costs; we combine it with the accuracy-oriented loss and learn a dynamic routing policy that skips certain layers in the network so as to optimize the hybrid loss. Our empirical results demonstrate that, compared to the baseline CNNs, EnergyNet can trim the inference energy cost by up to 40% on the CIFAR-10 testing set and up to 65% on the Tiny ImageNet testing set, while maintaining the same testing accuracies. It is further encouraging to observe that the energy awareness may act as a training regularizer and can even improve prediction accuracy: our models achieve 0.7% higher top-1 testing accuracy than the baseline on CIFAR-10 while saving up to 27% energy, and 1.0% higher top-5 testing accuracy on Tiny ImageNet while saving up to 50% energy.
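The sketch below is not the authors' implementation; it is a minimal illustration of how an input-dependent layer-skipping gate and a hybrid accuracy/energy loss could be wired together, assuming a straight-through gate, pre-profiled per-block energy estimates, and an illustrative trade-off weight `lam`. All class and function names (`GatedBlock`, `hybrid_loss`) are hypothetical.

```python
# Minimal sketch (assumptions noted above), not the paper's reference code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedBlock(nn.Module):
    def __init__(self, channels, block_energy):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Tiny gate: global pooling + linear layer -> probability of executing the block.
        self.gate = nn.Linear(channels, 1)
        self.block_energy = block_energy  # assumed pre-profiled cost (compute + data movement)

    def forward(self, x):
        p = torch.sigmoid(self.gate(x.mean(dim=(2, 3))))   # (N, 1) execute-probability
        g = (p > 0.5).float() + p - p.detach()              # straight-through hard gate
        out = self.body(x)
        y = g.view(-1, 1, 1, 1) * out + x                   # skipping reduces to the identity shortcut
        energy = g.squeeze(1) * self.block_energy           # energy charged only for executed blocks
        return F.relu(y), energy

def hybrid_loss(logits, labels, energies, lam=0.1):
    # Accuracy-oriented loss plus the energy loss, traded off by lam (assumed value).
    energy_term = torch.stack(energies, dim=1).sum(dim=1).mean()
    return F.cross_entropy(logits, labels) + lam * energy_term
```

Here `lam` controls how aggressively the routing policy trades energy for accuracy; larger values push the gates to skip more blocks.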
TL;DR: This paper proposes a new CNN model that combines an energy-aware loss with a dynamic routing strategy to enable adaptive, energy-efficient inference.
Keywords: Neural Network, Deep Learning, Energy Efficiency, Dynamic Routing