Energy-Efficiency Optimization in RIS-Assisted AAV Communications Based on Deep Reinforcement Learning

Published: 01 Jan 2025, Last Modified: 16 May 2025. IEEE Internet Things J. 2025. CC BY-SA 4.0.
Abstract: Reconfigurable-intelligent-surface (RIS)-assisted autonomous aerial vehicle (AAV) communications technology improves energy efficiency by reflecting signals. This article uses RIS and deep reinforcement learning (DRL) to jointly optimize the scheduling of ground terminals (GTs), AAV trajectories, resource allocation, and time-slot lengths to maximize system energy efficiency. We also address three shortcomings of existing DRL algorithms to push energy efficiency further. First, DRL faces exploration challenges due to the complexity of the solution space, resulting in low rewards. We propose the ant colony DRL (ACDRL) algorithm, which optimizes the GT scheduling order with the ant colony optimization (ACO) algorithm and feeds the result back to the DRL agent to guide subsequent decision making, thereby reducing exploration overhead. Second, to mitigate convergence to local optima when planning over a hybrid action space, we propose a hybrid discrete-continuous DRL (HDCDRL) algorithm that improves action accuracy. Finally, to generalize the model to similar tasks, we propose a transfer-DRL (T-DRL) model that reduces training time when the task changes. Experimental results show that the proposed solution outperforms the benchmark solutions.
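As a rough illustration of the hybrid discrete-continuous action idea behind HDCDRL, the sketch below shows one common way to structure a policy network with a discrete head (e.g., which GT to schedule) and a continuous head (e.g., AAV trajectory or power allocation). The layer sizes, action dimensions, and names here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    """Illustrative policy with a discrete head (GT scheduling choice)
    and a continuous head (trajectory/power parameters).
    All dimensions are placeholders, not taken from the paper."""

    def __init__(self, obs_dim=32, num_gts=8, cont_dim=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.discrete_head = nn.Linear(128, num_gts)        # logits over GTs
        self.mu_head = nn.Linear(128, cont_dim)              # mean of continuous action
        self.log_std = nn.Parameter(torch.zeros(cont_dim))   # state-independent std

    def forward(self, obs):
        h = self.backbone(obs)
        logits = self.discrete_head(h)
        mu = torch.tanh(self.mu_head(h))                     # bounded continuous mean
        std = self.log_std.exp().expand_as(mu)
        return logits, mu, std

# Sample a joint (discrete, continuous) action for one observation.
policy = HybridPolicy()
obs = torch.randn(1, 32)
logits, mu, std = policy(obs)
gt_choice = torch.distributions.Categorical(logits=logits).sample()
cont_action = torch.distributions.Normal(mu, std).sample()
```

Such a joint head is one standard way to handle mixed action spaces; the paper's HDCDRL may differ in how the two action types are coupled and trained.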