Multiagent Deep-Reinforcement-Learning-Based Cooperative Perception and Computation in VEC

Published: 2025 · Last Modified: 06 Jan 2026 · IEEE Internet Things J. 2025 · CC BY-SA 4.0
Abstract: Connected and autonomous vehicles (CAVs) are an important paradigm of intelligent transportation systems. Cooperative perception (CP) and vehicular edge computing (VEC) enhance CAVs’ perception capacity over the region of interest (RoI) while alleviating the pressure of intensive computation on onboard resources. However, existing CP and computation schemes are based on inefficient broadcast communications and still face challenges, such as highly dynamic channel conditions on communication links caused by vehicle mobility, and limited computing resources in VEC environments. Considering the delay sensitivity of CAVs’ perception tasks and the need for enhanced perception, we propose a unicast-based cooperative perception and computation scheme that achieves more efficient resource utilization and perception task execution in VEC scenarios. Our goal is to maximize CP gain and minimize task execution delay by optimizing the decisions of each ego CAV. To solve this sequential multiobjective decision-making problem, we propose a solution based on improved multiagent proximal policy optimization (MAPPO) deep reinforcement learning, in which CAV agents make adaptive decisions in a distributed manner based on their partial observations. Simulation results show that, compared with baseline algorithms, our proposed scheme effectively reduces the execution delay of ego CAVs’ perception tasks while ensuring a high perception gain.
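The abstract describes two ingredients that can be sketched concretely: a scalarized multiobjective reward trading off CP gain against task execution delay, and decentralized agents that act on partial observations (as in MAPPO-style centralized training with decentralized execution). The sketch below is illustrative only, assuming hypothetical names and weights (`cp_gain`, `delay`, `w_gain`, `w_delay`, the threshold policy); the paper's actual reward shaping and observation format are not given in the abstract.

```python
# Illustrative sketch of the abstract's two objectives; all names and
# weights here are assumptions, not the paper's actual formulation.

def multiobjective_reward(cp_gain: float, delay: float,
                          w_gain: float = 1.0, w_delay: float = 0.5) -> float:
    """Scalarize the two objectives: maximize cooperative-perception (CP)
    gain while minimizing perception-task execution delay."""
    return w_gain * cp_gain - w_delay * delay


class EgoCAVAgent:
    """Each ego CAV decides from its own partial observation
    (decentralized execution, as in MAPPO-style training)."""

    def __init__(self, policy):
        self.policy = policy  # maps a partial observation to an action

    def act(self, partial_obs: dict) -> str:
        return self.policy(partial_obs)


# Hypothetical decision rule: offload the perception task when the
# observed channel quality to the chosen unicast peer is good enough.
agent = EgoCAVAgent(
    policy=lambda obs: "offload" if obs["channel_quality"] > 0.5 else "local"
)
action = agent.act({"channel_quality": 0.8})   # → "offload"
reward = multiobjective_reward(cp_gain=2.0, delay=1.0)  # 1.0*2.0 - 0.5*1.0
```

In a trained system, the threshold policy would be replaced by the learned MAPPO actor network, and the scalar reward would drive the PPO clipped-objective update during centralized training.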