Graph Convolutional Reinforcement Learning-Guided Joint Trajectory Optimization and Task Offloading for Aerial Edge Computing
Abstract: The unique capabilities of Unmanned Aerial Vehicles (UAVs), including superior mobility, flexibility, and line-of-sight transmission, make them well-suited for supporting Aerial Edge Computing (AEC). This computing paradigm is particularly beneficial for meeting the computing demands of User Equipments (UEs) in emergency situations, as it efficiently supports task offloading. Given the service requirements of UEs, minimizing their processing delay in AEC systems is essential; this is achieved by jointly optimizing the UAV trajectory, flight speed, and the task offloading ratios allocated to the UEs. Because the problem is non-convex and has a continuous action space, recent studies have turned to the Deep Deterministic Policy Gradient (DDPG) algorithm to tackle similar challenges. However, the Deep Neural Networks (DNNs) employed in DDPG can extract latent information only from Euclidean data and are further hampered by the highly dynamic channel states in AEC networks, thereby discarding the valuable features embedded in the network's structural information. To address the task offloading problem in AEC systems, we propose a novel Graph Convolutional Pooling-DDPG (GCP-DDPG) algorithm that combines the multi-relational graph reasoning capability of the multi-relational Graph Convolutional Network (R-GCN) with reinforcement learning. Extensive simulation experiments evaluate the effectiveness of the GCP-DDPG algorithm; the results demonstrate a performance improvement of 34.6% over state-of-the-art approaches.
External IDs: dblp:journals/tits/WuTTLJ25
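To make the described architecture concrete, the sketch below shows, in PyTorch, one plausible reading of the GCP-DDPG actor: an R-GCN encoder performs per-relation message passing over the AEC network graph, a pooling step produces a graph-level embedding, and an MLP head outputs continuous actions (e.g. UAV heading, flight speed, and per-UE offloading ratios). This is a minimal sketch under assumed details, not the authors' implementation: the relation types, node features, layer sizes, and action layout (the names `RGCNLayer`, `GCPActor`, and `action_dim` are all hypothetical) are assumptions for illustration only.

```python
# Minimal sketch of an R-GCN encoder feeding a DDPG-style actor head.
# All dimensions, relation types, and action semantics are assumptions.
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    """One R-GCN layer: per-relation message passing plus a self-loop."""
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.rel_weights = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_relations)
        )
        self.self_loop = nn.Linear(in_dim, out_dim)

    def forward(self, x, adjs):
        # x: (N, in_dim) node features; adjs: list of (N, N) row-normalized
        # adjacency matrices, one per relation type (e.g. UE-UAV channel
        # links vs. UE-UE proximity -- hypothetical relations).
        out = self.self_loop(x)
        for adj, w in zip(adjs, self.rel_weights):
            out = out + adj @ w(x)
        return torch.relu(out)

class GCPActor(nn.Module):
    """R-GCN encoder + graph pooling + MLP head for continuous actions."""
    def __init__(self, in_dim, hid_dim, num_relations, action_dim):
        super().__init__()
        self.gc1 = RGCNLayer(in_dim, hid_dim, num_relations)
        self.gc2 = RGCNLayer(hid_dim, hid_dim, num_relations)
        self.head = nn.Sequential(
            nn.Linear(hid_dim, hid_dim), nn.ReLU(),
            nn.Linear(hid_dim, action_dim), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, x, adjs):
        h = self.gc2(self.gc1(x, adjs), adjs)
        pooled = h.mean(dim=0)  # mean-pool node embeddings into a graph state
        return self.head(pooled)

# Toy usage: 6 UEs + 1 UAV = 7 nodes, 2 relation types, 8 action dims
# (e.g. heading, speed, and 6 offloading ratios -- an assumed layout).
if __name__ == "__main__":
    N, R = 7, 2
    x = torch.randn(N, 4)                    # random placeholder features
    adjs = [torch.eye(N) for _ in range(R)]  # placeholder adjacencies
    actor = GCPActor(in_dim=4, hid_dim=32, num_relations=R, action_dim=8)
    print(actor(x, adjs).shape)              # torch.Size([8])
```

In a full DDPG training loop, this actor would be paired with a critic (plausibly also graph-based), target networks, and an experience replay buffer; those standard components are omitted here for brevity.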