Toward Energy-Efficiency: Integrating MATD3 Reinforcement Learning Method for Computational Offloading in RIS-Aided UAV-MEC Environments
Abstract: With the proliferation of Internet of Things (IoT) devices, there is an escalating demand for enhanced computing and communication capabilities. Mobile edge computing (MEC) addresses this need by relocating computing resources to the network edge, thereby delivering faster and more efficient services. This article introduces a computation offloading and energy consumption optimization framework that leverages reconfigurable intelligent surfaces (RIS), uncrewed aerial vehicles (UAVs), and MEC. The scheme aims to maximize energy efficiency by jointly optimizing task allocation, RIS phase shifts, and UAV trajectories. Employing the multi-agent twin delayed deep deterministic policy gradient (MATD3) reinforcement learning algorithm, the article further refines UAV trajectories and RIS configurations. Simulation results indicate that the proposed method surpasses the traditional concave-convex procedure (CCCP) algorithm in both UAV trajectory control and RIS configuration, demonstrating quicker convergence and enhanced stability. The method adapts to diverse environments and tasks, showing notable benefits in RIS-assisted interference suppression, particularly with large RIS arrays, thereby enhancing UAV data reception rates. Additionally, MATD3 exhibits faster and smoother convergence for extended task durations and smaller RIS scenarios. The simulations also reveal that UAVs tend to move closer to the RIS and that energy efficiency falls as the number of IoT tasks increases, affirming the proposed algorithm's high energy efficiency and effectiveness.
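To make the named algorithm concrete: MATD3 extends TD3 to multiple agents, and its core stabilizing devices are the clipped double-Q target and target-policy smoothing. The sketch below illustrates only those two pieces in NumPy, under assumed shapes and default hyperparameters; the function names and the networks they would wrap are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def clipped_double_q_target(r, q1_next, q2_next, done, gamma=0.99):
    """Bellman target using the minimum of the twin critics' next-state
    values, which curbs the overestimation bias of a single critic.
    (Illustrative sketch; gamma=0.99 is an assumed default.)"""
    q_min = np.minimum(q1_next, q2_next)
    return r + gamma * (1.0 - done) * q_min

def smoothed_target_action(mu, noise_std=0.2, noise_clip=0.5,
                           act_limit=1.0, rng=None):
    """Target-policy smoothing: add clipped Gaussian noise to the target
    actor's action before the twin critics evaluate it, so the value
    estimate is less sensitive to sharp peaks in the critic landscape."""
    rng = np.random.default_rng() if rng is None else rng
    noise = np.clip(rng.normal(0.0, noise_std, size=np.shape(mu)),
                    -noise_clip, noise_clip)
    return np.clip(mu + noise, -act_limit, act_limit)

# Tiny numeric example: one transition for each of two agents
# (e.g., two UAVs); the second transition is terminal.
r = np.array([1.0, 0.5])
q1n = np.array([10.0, 4.0])
q2n = np.array([8.0, 6.0])
done = np.array([0.0, 1.0])
y = clipped_double_q_target(r, q1n, q2n, done)
# y[0] = 1.0 + 0.99 * 8.0 = 8.92; y[1] = 0.5 (terminal state)
```

In the full algorithm each agent's centralized critics would consume the joint observations and actions (here, UAV positions, RIS phase-shift settings, and offloading decisions), while actor updates are delayed relative to critic updates; those pieces are omitted for brevity.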