Less-Energy-Usage Network with Batch Power Iteration

Published: 11 Jul 2023, Last Modified: 11 Jul 2023, NCW ICML 2023
Keywords: energy consumption, carbon footprint, neural network
Abstract: Large-scale neural networks are among the mainstream tools of modern big data analytics, but their training and inference phases are accompanied by huge energy consumption and a large carbon footprint. Energy efficiency, running time complexity, and model storage size are three major considerations when deploying deep neural networks in modern applications. Here we introduce the Less-Energy-Usage Network, or LEAN. Unlike regular network compression methods (e.g., pruning and knowledge distillation) that transform a pre-trained large network into a smaller one, our method builds a lean and effective network during the training phase. It is based on spectral theory and batch power iteration learning. This technique can be applied to almost any type of neural network to reduce its size. Preliminary experimental results show that LEAN consumes 30% less energy, achieving 95% of the baseline accuracy with a 1.5X speed-up and 90% fewer parameters compared against the baseline CNN model.
Submission Number: 5
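The abstract does not spell out the batch power iteration procedure itself. As a rough illustration only, the sketch below shows classical power iteration used to extract the dominant singular pair of a layer's weight matrix; a low-rank factorization of this kind is one way a "lean" layer could be constructed. The function name, the NumPy setting, and the rank-1 usage example are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def power_iteration(W, num_iters=50, tol=1e-7):
    """Estimate the dominant singular value and singular vectors of a
    weight matrix W via classical power iteration."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    sigma = 0.0
    for _ in range(num_iters):
        u = W @ v                      # left direction
        u /= np.linalg.norm(u)
        v_new = W.T @ u                # right direction
        sigma_new = np.linalg.norm(v_new)
        v_new /= sigma_new
        converged = abs(sigma_new - sigma) < tol
        v, sigma = v_new, sigma_new
        if converged:
            break
    return sigma, u, v

# Hypothetical usage: approximate a dense layer's weights by their
# top singular direction, the building block of a low-rank ("lean") layer.
W = np.random.randn(256, 512)
sigma, u, v = power_iteration(W)
rank1 = sigma * np.outer(u, v)         # best rank-1 approximation of W
print(np.linalg.norm(W - rank1) / np.linalg.norm(W))
```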