Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training

Published: 01 Jan 2023, Last Modified: 15 May 2025 · ICIAP (2) 2023 · CC BY-SA 4.0
Abstract: Deep learning models have grown substantially in the number of parameters they possess, which increases the number of operations executed during inference and, in turn, energy consumption and prediction latency. In this work, we propose EAT, a gradient-based algorithm that aims to reduce energy consumption during model training. To this end, we leverage a differentiable approximation of the \(\ell _0\) norm and use it as a sparsity penalty on the training loss. Through an experimental analysis on three datasets and two deep neural networks, we demonstrate that our energy-aware training algorithm EAT trains networks with a better trade-off between classification performance and energy efficiency.
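The abstract describes the core idea as adding a differentiable \(\ell _0\)-norm surrogate as a sparsity penalty on the training loss. The following is a minimal PyTorch sketch of that idea, not the paper's actual implementation: the sigmoid-based surrogate, the `beta` sharpness parameter, and the penalty weight `lam` are illustrative assumptions, and the paper's exact approximation may differ.

```python
import torch
import torch.nn as nn

def l0_approx(model: nn.Module, beta: float = 5.0) -> torch.Tensor:
    """Differentiable surrogate of the l0 norm over all model weights.

    Illustrative choice: each weight contributes 2*(sigmoid(beta*|w|) - 0.5),
    a smooth value in [0, 1) that is 0 at w = 0 and approaches 1 as |w| grows,
    so the sum approximates the number of non-zero parameters.
    """
    total = torch.zeros((), )
    for p in model.parameters():
        total = total + (torch.sigmoid(beta * p.abs()) - 0.5).mul(2.0).sum()
    return total

def energy_aware_loss(model: nn.Module,
                      outputs: torch.Tensor,
                      targets: torch.Tensor,
                      lam: float = 1e-4,
                      beta: float = 5.0) -> torch.Tensor:
    """Task loss plus the sparsity penalty (hypothetical weighting)."""
    task_loss = nn.functional.cross_entropy(outputs, targets)
    return task_loss + lam * l0_approx(model, beta=beta)
```

In a standard training loop, `energy_aware_loss` would simply replace the plain cross-entropy before `backward()`, so gradients of the sparsity surrogate push redundant weights toward zero alongside the task objective.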