Deep-Reinforcement-Learning-Based Distributed Computation Offloading in Vehicular Edge Computing Networks

Published: 01 Jan 2023, Last Modified: 11 May 2024. IEEE Internet of Things Journal, 2023. License: CC BY-SA 4.0.
Abstract: Vehicular edge computing has emerged as a promising paradigm for offloading computation-intensive, latency-sensitive tasks to mobile-edge computing (MEC) servers. However, it is difficult to provide users with excellent Quality-of-Service (QoS) by relying on these server resources alone. Therefore, in this article, we propose to formulate the computation offloading policy based on deep reinforcement learning (DRL) in a vehicle-assisted vehicular edge computing network (VAEN), where the idle resources of vehicles are treated as edge resources. Specifically, each task is represented by a directed acyclic graph (DAG) and offloaded to edge nodes according to our proposed subtask scheduling priority algorithm. Furthermore, we formalize the computation offloading problem under the constraints of candidate service vehicle models, aiming to minimize the long-term system cost, which comprises delay and energy consumption. To this end, we propose a distributed computation offloading algorithm based on multiagent DRL (DCOM), where an improved actor–critic network (IACN) is devised to extract features, and a joint mechanism of prioritized experience replay and adaptive $n$-step learning (JMPA) is proposed to enhance learning efficiency. Numerical simulations demonstrate that, in the VAEN scenario, DCOM achieves significant reductions in latency and energy consumption compared with other advanced benchmark algorithms.
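To make the JMPA idea concrete, the sketch below combines prioritized experience replay with multi-step returns, the two ingredients the abstract names. It is a minimal illustration, not the paper's implementation: the class name `NStepPrioritizedReplay`, the fixed horizon `n_step`, the FIFO eviction policy, and the proportional `priority^alpha` sampling rule are all assumptions, since the abstract does not specify the adaptive rule for choosing $n$ or the buffer details.

```python
import random
from collections import deque

class NStepPrioritizedReplay:
    """Hypothetical sketch combining prioritized experience replay with
    n-step returns, in the spirit of the JMPA mechanism named in the
    abstract. The paper's adaptive rule for choosing n is not given in
    the abstract, so this sketch uses a fixed horizon."""

    def __init__(self, capacity=10000, n_step=3, gamma=0.99, alpha=0.6):
        self.capacity = capacity
        self.n_step = n_step                 # multi-step return horizon
        self.gamma = gamma                   # discount factor
        self.alpha = alpha                   # priority exponent
        self.buffer = []                     # (s, a, R_n, s_n, done) tuples
        self.priorities = []                 # one priority per transition
        self.window = deque(maxlen=n_step)   # sliding window of raw steps

    def push(self, state, action, reward, next_state, done):
        # Accumulate raw steps; emit an n-step transition once the
        # window is full (or the episode ends early).
        self.window.append((state, action, reward, next_state, done))
        if len(self.window) < self.n_step and not done:
            return
        s0, a0 = self.window[0][0], self.window[0][1]
        ret = 0.0
        for k, (_, _, r, s_k, d_k) in enumerate(self.window):
            ret += (self.gamma ** k) * r     # discounted n-step return
            if d_k:
                break
        if len(self.buffer) >= self.capacity:  # FIFO eviction
            self.buffer.pop(0)
            self.priorities.pop(0)
        # New transitions get the current maximum priority so they are
        # sampled at least once before their TD error is known.
        self.buffer.append((s0, a0, ret, s_k, d_k))
        self.priorities.append(max(self.priorities, default=1.0))
        if done:
            self.window.clear()

    def sample(self, batch_size):
        # Sample indices with probability proportional to priority^alpha.
        weights = [p ** self.alpha for p in self.priorities]
        idx = random.choices(range(len(self.buffer)), weights=weights,
                             k=batch_size)
        return idx, [self.buffer[i] for i in idx]

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # After a learning step, refresh priorities with new TD errors.
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(err) + eps
```

An adaptive variant would adjust `n_step` online, for example from TD-error statistics, rather than fixing it at construction; importance-sampling correction weights, which prioritized replay normally requires for unbiased updates, are omitted here for brevity.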