Energy-Efficient Collaborative Multi-Access Edge Computing via Deep Reinforcement Learning

Published: 01 Jan 2023, Last Modified: 15 May 2025 · IEEE Trans. Ind. Informatics 2023 · CC BY-SA 4.0
Abstract: The joint problem of task offloading, collaborative computing, and resource allocation in multi-access edge computing (MEC) is challenging. In this article, we investigate joint collaborative task offloading and resource allocation, splitting computing tasks at MEC servers through collaboration among the MEC servers and a cloud server. We formulate a joint collaborative task offloading, computing resource allocation, and subcarrier and power allocation problem whose goal is to minimize the total energy consumption of the MEC system while satisfying a delay constraint. The formulated problem is a nonconvex mixed-integer optimization problem. To solve it, we propose a deep reinforcement learning (DRL)-based bilevel optimization framework: the task offloading, computing collaboration, and power and subcarrier allocation subproblems are solved at the upper level, while the computing resource allocation subproblem is solved at the lower level. We combine dueling DQN and double DQN and add adaptive parameter-space noise to improve DRL performance in MEC. Simulation results demonstrate that the proposed algorithm achieves near-optimal energy efficiency and task completion rate compared with other DRL-based approaches and benchmark schemes under various network parameter settings.
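The abstract names three DQN enhancements (a dueling head, double-DQN target computation, and parameter-space noise). The following is a minimal NumPy sketch of how these three pieces typically fit together; it is not the paper's implementation, and the linear "networks", state/action dimensions, and fixed noise scale `sigma` are illustrative assumptions (in the full method, sigma would be adapted online from a policy-distance measure).

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_q(params, state):
    # Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)
    W_v, W_a = params
    v = state @ W_v            # value stream, shape (1,)
    a = state @ W_a            # advantage stream, shape (n_actions,)
    return v + a - a.mean()    # broadcasts to shape (n_actions,)

n_state, n_actions = 4, 3      # toy sizes, not from the paper
online = (rng.normal(size=(n_state, 1)), rng.normal(size=(n_state, n_actions)))
target = (online[0].copy(), online[1].copy())   # periodically synced copy

def double_dqn_target(reward, next_state, gamma=0.99):
    # Double DQN: the online net selects the action,
    # the target net evaluates it (reduces overestimation bias).
    best_a = int(np.argmax(dueling_q(online, next_state)))
    return reward + gamma * dueling_q(target, next_state)[best_a]

def perturb(params, sigma=0.05):
    # Parameter-space noise: explore by perturbing weights directly,
    # instead of epsilon-greedy action noise.
    return tuple(W + rng.normal(scale=sigma, size=W.shape) for W in params)

s_next = rng.normal(size=n_state)
y = double_dqn_target(reward=1.0, next_state=s_next)  # regression target for Q(s, a)
noisy_online = perturb(online)                        # exploration policy's weights
```

The dueling decomposition lets the agent learn state values even for actions it rarely takes, while the double-DQN target decouples action selection from action evaluation; both are useful in MEC settings where the joint offloading/allocation action space is large.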