NFAERCOM: A Near-Far Area Experience Replay-based Computation Offloading Method

Published: 01 Jan 2024 · Last Modified: 06 Jun 2025 · ISPA 2024 · License: CC BY-SA 4.0
Abstract: Computation offloading plays an important role in Mobile Edge Computing (MEC). Most mainstream Deep Reinforcement Learning (DRL)-based computation offloading methods train their networks with uniform random experience replay. This training scheme ignores the value differences between experiences, which slows training and increases task completion delay, energy consumption, and task drop rate. Prioritized Experience Replay (PER) alleviates this issue to some extent. However, using the Temporal Difference (TD) error as the criterion of experience importance cannot select the experiences that the Actor network is most "concerned" with under the Actor-Critic framework, which limits the algorithm's performance. To address these issues, this paper studies the computation offloading problem in scenarios with multiple mobile devices (MDs) and multiple MEC servers, and proposes NFAERCOM, a computation offloading method based on a near-far area experience replay algorithm. NFAERCOM additionally models the queuing delay at the MEC servers and introduces a new near-far area experience replay algorithm. Evaluation results demonstrate that NFAERCOM effectively reduces task completion delay, energy consumption, and task drop rate.
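For context, the TD-error-based PER baseline that the abstract critiques can be summarized in a short sketch. The Python buffer below is a generic illustration of standard PER (Schaul et al.), not NFAERCOM's near-far area replay, whose details are not given here; all names and hyperparameters (PrioritizedReplayBuffer, alpha, beta, eps) are illustrative assumptions rather than the paper's implementation.

    import numpy as np

    class PrioritizedReplayBuffer:
        """Generic TD-error-based prioritized replay (illustrative sketch)."""

        def __init__(self, capacity, alpha=0.6):
            self.capacity = capacity
            self.alpha = alpha  # how strongly priorities skew sampling (0 = uniform)
            self.buffer = []
            self.priorities = np.zeros(capacity, dtype=np.float64)
            self.pos = 0

        def add(self, transition):
            # New experiences get the current max priority so each is sampled at least once.
            max_prio = self.priorities.max() if self.buffer else 1.0
            if len(self.buffer) < self.capacity:
                self.buffer.append(transition)
            else:
                self.buffer[self.pos] = transition
            self.priorities[self.pos] = max_prio
            self.pos = (self.pos + 1) % self.capacity

        def sample(self, batch_size, beta=0.4):
            prios = self.priorities[:len(self.buffer)]
            probs = prios ** self.alpha
            probs /= probs.sum()
            indices = np.random.choice(len(self.buffer), batch_size, p=probs)
            # Importance-sampling weights correct the bias of non-uniform sampling.
            weights = (len(self.buffer) * probs[indices]) ** (-beta)
            weights /= weights.max()
            batch = [self.buffer[i] for i in indices]
            return batch, indices, weights

        def update_priorities(self, indices, td_errors, eps=1e-6):
            # Priority = |TD error| + eps, so no experience is starved entirely.
            for idx, err in zip(indices, td_errors):
                self.priorities[idx] = abs(err) + eps

Because the priority here is driven purely by the Critic's TD error, it need not reflect which experiences most influence the Actor's policy update; this is the limitation the near-far area replay algorithm is designed to address.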