Deep Reinforcement Learning-Based Adaptive Offloading Algorithm for Wireless Power Transfer-Aided Mobile Edge Computing

Published: 01 Jan 2024 · Last Modified: 08 Apr 2025 · WCNC 2024 · CC BY-SA 4.0
Abstract: Mobile Edge Computing (MEC), a real-time computing paradigm that extends computation to the network edge, has been widely adopted. In recent years, Wireless Power Transfer-Aided Mobile Edge Computing (WPT-MEC) has garnered significant attention. However, it faces challenges in formulating effective offloading strategies and optimally allocating electrical energy resources. Existing solutions have notable limitations: heuristic methods incur high computational complexity and struggle to adapt to dynamic environments, while Deep Reinforcement Learning (DRL), although it overcomes these drawbacks, requires extensive training time and data. To address these issues, this paper proposes a DRL-Based Adaptive Offloading (DRLAO) algorithm for WPT-MEC. The algorithm dynamically adapts to environmental changes, makes decisions rapidly, and adjusts parameters in real time. DRLAO comprises three components: an Augmented Deep Neural Network (AugDNN), Order-Preserving Quantization (KOQ) for making offloading decisions, and a Modified Secant Method (MSM) for allocating electrical energy resources. DRLAO achieves over 98% of optimal performance across different numbers of Wireless Edge Devices (WEDs) with lower CPU latency, and outperforms the baseline algorithms in both effectiveness and performance. In addition, it adapts and converges quickly, with minimal oscillation, in dynamic environments. The source code is available at https://github.com/Aurora001226IDRLAO.
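The order-preserving quantization component can be illustrated with a minimal sketch. The abstract does not spell out DRLAO's KOQ variant, so the code below follows the standard order-preserving quantization scheme used in DRL-based offloading work: a DNN emits a relaxed offloading vector x ∈ [0,1]^N, and up to K binary candidate decisions are generated by thresholding, first at 0.5 and then at the entries of x closest to 0.5. The function name and all details are illustrative, not taken from the paper.

```python
import numpy as np

def order_preserving_quantization(x, K):
    """Generate up to K binary offloading candidates from a relaxed
    DNN output x in [0,1]^N. Illustrative sketch of order-preserving
    quantization; DRLAO's KOQ variant may differ in detail."""
    x = np.asarray(x, dtype=float)
    # First candidate: plain thresholding at 0.5.
    candidates = [(x > 0.5).astype(int)]
    # Remaining candidates: threshold at the entries nearest to 0.5,
    # in ascending order of distance, which preserves the ordering of x.
    order = np.argsort(np.abs(x - 0.5))
    for idx in order[: K - 1]:
        t = x[idx]
        # Flip the pivot entry relative to the 0.5-threshold candidate.
        cand = (x >= t).astype(int) if t <= 0.5 else (x > t).astype(int)
        candidates.append(cand)
    # Drop duplicate candidates while preserving generation order.
    seen, unique = set(), []
    for c in candidates:
        key = tuple(c)
        if key not in seen:
            seen.add(key)
            unique.append(c)
    return unique
```

In a complete pipeline, each candidate would be scored by solving the energy-allocation subproblem (the paper uses its Modified Secant Method for this), and the best-scoring binary decision would be executed and stored for replay training.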