Combinatorial Client-Master Multiagent Deep Reinforcement Learning for Task Offloading in Mobile Edge Computing

Published: 01 Jan 2024 · Last Modified: 14 Oct 2024 · AAMAS 2024 · CC BY-SA 4.0
Abstract: Deep reinforcement learning (DRL) is gaining popularity in task-offloading problems because it can adapt to dynamic changes and minimize online computational complexity. However, the various types of continuous and discrete resource constraints on user devices (UDs) and mobile edge computing (MEC) servers pose challenges to the design of an efficient DRL-based task-offloading strategy. Existing DRL-based task-offloading algorithms focus on the constraints of the UDs, assuming the availability of sufficient storage resources on the server. Moreover, existing multiagent DRL (MADRL)-based task-offloading algorithms use homogeneous agents and treat homogeneous constraints as a penalty in their reward function. In this work, we propose a novel combinatorial client-master MADRL (CCM_MADRL) algorithm for task offloading in mobile edge computing (CCM_MADRL_MEC) that allows the UDs to decide their resource requirements and the server to make a combinatorial decision based on the UDs' requirements. CCM_MADRL_MEC is the first MADRL approach to task offloading that considers server storage capacity in addition to the constraints of the UDs. By taking advantage of combinatorial action selection, CCM_MADRL_MEC shows superior convergence over existing benchmark and heuristic algorithms.
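To make the client-master decision structure concrete, the following is a minimal illustrative sketch, not the paper's algorithm: in CCM_MADRL_MEC the master's combinatorial choice is learned by DRL, whereas here a simple greedy knapsack-style rule stands in for it, and all names, values, and the priority heuristic are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each user device (UD) agent proposes the resources
# its task would need on the server, plus a scalar "priority" score.
NUM_UDS = 5
SERVER_STORAGE = 10.0  # illustrative server storage capacity

# Client step: each UD's policy outputs its resource requirement.
# Random values stand in for the agents' learned policies here.
storage_req = rng.uniform(1.0, 6.0, NUM_UDS)   # storage each task needs
priority    = rng.uniform(0.0, 1.0, NUM_UDS)   # value of offloading each task

# Master step: a combinatorial decision over the UDs' requests -- greedily
# admit tasks in descending priority-per-unit-storage order until the
# server storage capacity would be exceeded.
order = np.argsort(-priority / storage_req)
offloaded, used = [], 0.0
for i in order:
    if used + storage_req[i] <= SERVER_STORAGE:
        offloaded.append(int(i))
        used += storage_req[i]

print("offloaded UDs:", sorted(offloaded), "storage used:", round(used, 2))
```

In the actual method, the master replaces this hand-coded greedy rule with a learned critic that scores the UDs' proposed actions, so the selection respects the server-side constraint while adapting to the UDs' heterogeneous, partly continuous requirements.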