Workflow offloading for energy minimization under deep reinforcement learning

Published: 2025, Last Modified: 21 Jan 2026 · Computing 2025 · CC BY-SA 4.0
Abstract: As the number of computationally intensive workflows that cloud, edge, and end devices must process continues to grow, the overall energy consumed by the system increases substantially. Given the dynamic nature of data sizes within workflows, data compression techniques can be employed to curtail transmission energy, or tasks can be offloaded directly to edge and cloud devices for execution. Selecting suitable devices for these workflows so as to minimize energy consumption is difficult. To address the dynamic variation of task data in workflows and the heterogeneous characteristics of resources, we propose a novel offloading scheme based on deep reinforcement learning (DRL). We first propose a task priority algorithm that treats energy consumption as a key factor. We then construct a mathematical model based on the Markov Decision Process (MDP) that considers both task and system states to minimize overall system energy consumption. Finally, we employ the deep Q network (DQN) algorithm to train the proposed MDP model, enhancing the DQN's experience pool replacement strategy for improved learning efficiency. To validate the proposed approach, we conduct experiments to fine-tune algorithm parameters and to assess the benefits of data compression for energy consumption optimization. Compared with other algorithms, the proposed algorithm executes the same workflow applications with lower energy consumption and acceptable makespan.
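The abstract's "enhanced experience pool replacement" is not specified further here; one plausible reading is that, when the replay buffer is full, the least informative experience (e.g., the one with the smallest TD error) is evicted rather than the oldest. The sketch below illustrates that idea in plain Python; the class and field names are illustrative assumptions, not the paper's published code.

```python
import random
from collections import namedtuple

# Hypothetical transition record; td_error is assumed to be tracked per experience.
Experience = namedtuple("Experience", "state action reward next_state td_error")

class PriorityReplacementBuffer:
    """Replay buffer that, when full, evicts the experience with the
    smallest |TD error| instead of the oldest one -- one possible
    interpretation of an 'enhanced experience pool replacement'."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pool = []

    def add(self, exp):
        if len(self.pool) < self.capacity:
            self.pool.append(exp)
            return
        # Find the least informative stored experience.
        idx = min(range(len(self.pool)),
                  key=lambda i: abs(self.pool[i].td_error))
        # Only replace it if the new experience is more informative.
        if abs(exp.td_error) > abs(self.pool[idx].td_error):
            self.pool[idx] = exp

    def sample(self, batch_size):
        # Uniform sampling from the retained pool for DQN minibatches.
        return random.sample(self.pool, min(batch_size, len(self.pool)))
```

Under this scheme, high-error transitions (those the Q network predicts poorly) survive longer in the pool, which is the usual motivation for prioritized replacement over plain FIFO eviction.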