Abstract: Multi-access edge computing (MEC) is regarded as an effective technique for reducing service latency in a V2X network by offloading computational tasks. With MEC, a three-tier offloading architecture can be built in which a vehicle offloads computational tasks to the cloud by communicating with a base station (gNB) and Road Side Units (RSUs). In this paper, we focus on three-tier V2X networks that rely on three offloading paths: Vehicle-to-Infrastructure (i.e., vehicle to RSU), Vehicle-to-Cloud (i.e., vehicle to gNB), and Infrastructure-to-Cloud (i.e., RSU to gNB). We propose an offloading strategy based on deep reinforcement learning that aims to reduce the average task latency. Specifically, we leverage a Deep Q Network to estimate the state-action value of each candidate offloading decision and select the best one. We also propose a novel exploration scheme and a new model-training strategy. The experimental results indicate that our offloading method outperforms the state-of-the-art, particularly in critical scenarios with high vehicle arrival or packet generation rates.
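To make the decision mechanism concrete, below is a minimal sketch (not the authors' implementation) of a DQN agent that maps a task/network state to one of the three offloading paths named in the abstract. The state layout, network sizes, and epsilon-greedy exploration are illustrative assumptions; the paper's own exploration scheme and training strategy are not reproduced here.

```python
# Hedged sketch of DQN-based offloading selection for a three-tier V2X
# network. All dimensions and the state encoding below are assumptions
# for illustration, not taken from the paper.
import random
import torch
import torch.nn as nn

ACTIONS = ["V2I (vehicle -> RSU)", "V2C (vehicle -> gNB)", "I2C (RSU -> gNB)"]

class QNetwork(nn.Module):
    """Estimates Q(s, a) for each of the three offloading actions."""
    def __init__(self, state_dim: int = 6, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, len(ACTIONS)),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_action(qnet: QNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy selection; a stand-in for the paper's exploration scheme."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(qnet(state).argmax().item())

if __name__ == "__main__":
    qnet = QNetwork()
    # Hypothetical normalized state: task size, required CPU cycles,
    # vehicle speed, RSU queue length, gNB load, channel quality.
    state = torch.tensor([0.4, 0.7, 0.2, 0.5, 0.9, 0.3])
    a = select_action(qnet, state, epsilon=0.1)
    print("Offload via:", ACTIONS[a])
```

In a full training loop, the reward would be the negative of the observed task latency, so that maximizing the expected return corresponds to minimizing average latency as the abstract describes.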