Data Collection Maximization with V-DQN in UAV-Assisted Wireless Sensor Networks: A Deep Reinforcement Learning Approach
Abstract: Unmanned Aerial Vehicles (UAVs) offer advantages such as high-quality channels and high mobility, making them widely used to assist wireless sensor networks (WSNs) in applications such as environmental monitoring and smart medical care. However, WSNs face tight energy constraints. To address this challenge, a novel one-hop task scheduling scheme for wireless sensor networks is proposed to maximize the collected data value. We formulate the problem as a Constrained Markov Decision Process, aiming to maximize the data value of all cluster heads under multi-dimensional constraints. Furthermore, a robust deep reinforcement learning (DRL) algorithm, V-DQN (Value-Deep Q-learning), is proposed to overcome the curse of dimensionality that afflicts tabular Q-value representations and to enhance data collection efficiency. Simulation results demonstrate that our approach surpasses the MCTS-based baseline by 4.2%, and notably outperforms DTS-UAV, Greedy, and Random by about 30.12%, 62.15%, and 82.28%, respectively. It also surpasses BPSO by approximately 38.64% in its scenario, achieving a data collection efficiency of approximately 93.8%.
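The abstract's motivation for replacing a tabular Q-value representation with a DQN-style approximator can be illustrated with a back-of-the-envelope calculation. The sketch below is not the paper's V-DQN implementation; the state dimensions, discretization levels, and network width are illustrative assumptions, chosen only to show how a Q-table's size grows exponentially with the state dimension while a small Q-network's parameter count grows roughly linearly.

```python
# Sketch (illustrative assumptions, not the paper's V-DQN): why a Q-table
# becomes infeasible as the UAV/WSN state grows, motivating a DQN-style
# function approximator.

def q_table_size(levels_per_dim: int, num_dims: int, num_actions: int) -> int:
    """Entries a tabular Q-function must store: |S| * |A|,
    where |S| = levels_per_dim ** num_dims."""
    return (levels_per_dim ** num_dims) * num_actions

def q_network_params(num_dims: int, hidden: int, num_actions: int) -> int:
    """Parameters of a one-hidden-layer Q-network (weights + biases)."""
    return num_dims * hidden + hidden + hidden * num_actions + num_actions

# Hypothetical example: UAV position (x, y), residual energy, and buffer
# states of several cluster heads, each discretized into 20 levels,
# with 5 flight actions.
table = q_table_size(levels_per_dim=20, num_dims=10, num_actions=5)
net = q_network_params(num_dims=10, hidden=64, num_actions=5)
print(table)  # on the order of 10**13 entries: intractable to store/visit
print(net)    # roughly a thousand parameters: tractable to train
```

The contrast makes the abstract's point concrete: even a coarse 20-level discretization of a 10-dimensional state yields a table far too large to fill, whereas a compact network generalizes across unvisited states.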
External IDs: dblp:conf/ijcnn/JinL25