Deep Reinforcement Learning for Rechargeable AAV-Assisted Data Collection From Dense Mobile Sensor Nodes
Abstract: In the realm of the Internet of Things (IoT), Autonomous Aerial Vehicles (AAVs) have garnered significant attention due to their high mobility and cost-effectiveness. However, limited onboard energy, kinematic constraints, and highly dynamic environments pose significant challenges for AAVs in continuous real-time data collection scenarios. To address these challenges, we investigate the use of a rechargeable AAV for data collection in scenarios with dense mobile sensor nodes. This study formulates the problem as a Markov decision process and designs a reinforcement learning approach called guided search twin-dueling-double deep Q-network (GS-TD3QN). Within this framework, the goal is to jointly optimize the flight path, charging strategy, and data upload intervals so as to maximize the total number of uploaded data packets, improve energy efficiency, and minimize the average age of information. Additionally, we propose an action filter to mitigate collision risks and explore various scheduling strategies. Finally, simulation results confirm the effectiveness of the proposed algorithm and validate its applicability across varying numbers of nodes.
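The abstract's action filter is described only at a high level. As a minimal, purely illustrative sketch (not the paper's implementation), such a filter can be realized as a mask over the discrete action set that excludes moves leading out of bounds or into occupied cells before greedy Q-value selection; the grid geometry, action set, and function names below are assumptions for illustration.

```python
import numpy as np

# Hypothetical action filter: mask actions that would take the AAV out of
# the operating area or into an obstacle cell, then act greedily on the
# remaining Q-values. All names and geometry here are illustrative.

ACTIONS = {0: (0, 1), 1: (0, -1), 2: (1, 0), 3: (-1, 0), 4: (0, 0)}  # N, S, E, W, hover
GRID = 10  # side length of an assumed square grid world

def valid_mask(pos, obstacles):
    """Boolean mask over actions keeping the AAV in bounds and collision-free."""
    mask = np.zeros(len(ACTIONS), dtype=bool)
    for a, (dx, dy) in ACTIONS.items():
        nx, ny = pos[0] + dx, pos[1] + dy
        mask[a] = 0 <= nx < GRID and 0 <= ny < GRID and (nx, ny) not in obstacles
    return mask

def filtered_greedy(q_values, pos, obstacles):
    """Greedy action selection with invalid actions masked to -inf."""
    q = np.where(valid_mask(pos, obstacles), np.asarray(q_values, float), -np.inf)
    return int(np.argmax(q))

# Example: at a corner with the eastern cell blocked, the filter rejects
# the highest-Q action (East) and falls back to the best valid one (North).
q = [1.0, 5.0, 9.0, 2.0, 0.0]
print(filtered_greedy(q, (0, 0), {(1, 0)}))  # → 0 (North)
```

The same mask can also be applied during training (e.g., when sampling exploratory actions) so the agent never proposes a move the environment would have to reject.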