Deep reinforcement learning for optimizing computation latency in wireless-powered Multi-Access Edge Computing systems: A partial offloading approach
Abstract: The integration of wireless power transfer with multi-access edge computing (MEC) is critical for next-generation wireless networks, yet the growing number of users makes ultra-low latency difficult to sustain. This study examines a wireless-powered MEC network that employs a partial offloading strategy. The aim is to devise an online algorithm that optimally manages task offloading and resource allocation while adapting to dynamic channel conditions. To this end, we design a Deep Reinforcement Online Offloading with Two-Stage Optimization (DROO-TSO) framework, which predicts partial offloading ratios and then optimizes charging time and resource allocation. Empirical results show that DROO-TSO achieves sub-millisecond execution times on both GPU and CPU platforms. Compared to DDPG-based baselines, DROO-TSO reduces the total computation delay by 21.49% while adaptively converging to environment-optimized strategies. Both the algorithm runtime and the total computation delay meet stringent low-latency requirements, validating the framework's suitability for dynamic wireless-powered MEC networks.
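To make the two-stage structure concrete, the following is a minimal sketch, not the paper's actual DROO-TSO implementation: a stand-in actor network maps channel gains to partial offloading ratios (stage 1), and a one-dimensional search over the charging-time fraction stands in for the second-stage optimization of charging time and resources. All constants, the energy-harvesting law, and the delay model are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                                  # number of wireless devices (assumed)
h = rng.exponential(1e-6, size=N)      # channel gains (synthetic)
L = rng.uniform(1e5, 3e5, size=N)      # task sizes in bits (synthetic)
W = rng.normal(0, 1e6, size=(N, N))    # stand-in for trained actor weights

def actor(h):
    """Stage 1 (DRL actor stand-in): map channel gains to partial
    offloading ratios rho in [0, 1]; a trained policy network would
    replace this single linear layer + sigmoid."""
    return 1.0 / (1.0 + np.exp(-W @ h))

def total_delay(rho, tau0):
    """Stage 2 objective: total computation delay for a given
    offloading split rho and charging-time fraction tau0
    (assumed harvest-then-offload model)."""
    E = 0.7 * 3.0 * h * tau0                   # harvested energy (assumed)
    rate = 1e6 * np.log2(1.0 + E * h / 1e-13)  # offload rate, bits/s (assumed)
    t_loc = (1 - rho) * L * 100 / 1e9          # local computing time
    t_off = rho * L / np.maximum(rate, 1e-9)   # uplink transmission time
    return float(np.max(np.maximum(t_loc, t_off)))

# Stage 2: grid search over the charging-time fraction for the
# ratios predicted in stage 1.
rho = actor(h)
taus = np.linspace(0.01, 0.99, 99)
best_tau = min(taus, key=lambda t: total_delay(rho, t))
print(f"rho = {np.round(rho, 2)}, tau0* = {best_tau:.2f}, "
      f"delay = {total_delay(rho, best_tau) * 1e3:.3f} ms")
```

In this hypothetical setup, the expensive learning happens offline in the actor, so the online step reduces to one forward pass plus a cheap low-dimensional search, which is consistent with the sub-millisecond execution times the abstract reports.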