Abstract: In most Internet of Vehicles (IoV) scenarios, intelligent vehicle terminals must cope with a multitude of heterogeneous tasks, each subject to increasingly strict constraints on delay and energy consumption. Task offloading is an efficient way to tackle this issue. However, due to performance constraints, single-tier or two-tier offloading strategies cannot support fine-grained task allocation and flexible service deployment. To address these problems, we propose a collaborative cloud-edge-end task offloading scheme for IoV scenarios. Because traditional single-agent Deep Reinforcement Learning (DRL) struggles to coordinate the multiple objectives of dynamic services simultaneously, we propose a task offloading strategy based on multi-agent Deep Deterministic Policy Gradient (DDPG) that jointly considers service delay and energy consumption. We further introduce attentive experience replay (AER) to mitigate the insufficient experience sampling that the catastrophic forgetting problem causes in the DDPG algorithm. In simulations of IoV scenarios, the proposed model significantly improves task offloading effectiveness, reducing delay by 10.6% and energy consumption by 8.1% compared with state-of-the-art baseline algorithms.
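To make the AER component concrete, the following is a minimal sketch of attentive replay sampling, not the authors' implementation: a candidate set is drawn uniformly from the buffer, ranked by similarity between stored states and the agent's current state, and the most similar transitions are kept for training. The buffer layout, the cosine similarity measure, and the `candidate_factor` oversampling parameter are illustrative assumptions.

```python
import numpy as np


class AttentiveReplayBuffer:
    """Illustrative replay buffer that favors transitions whose states
    resemble the agent's current state, so experiences relevant to the
    present task mix are replayed more often."""

    def __init__(self, capacity, state_dim, action_dim):
        self.capacity = capacity
        self.states = np.zeros((capacity, state_dim), np.float32)
        self.actions = np.zeros((capacity, action_dim), np.float32)
        self.rewards = np.zeros(capacity, np.float32)
        self.next_states = np.zeros((capacity, state_dim), np.float32)
        self.ptr, self.size = 0, 0

    def push(self, s, a, r, s_next):
        # Overwrite the oldest entry once the ring buffer is full.
        self.states[self.ptr], self.actions[self.ptr] = s, a
        self.rewards[self.ptr], self.next_states[self.ptr] = r, s_next
        self.ptr = (self.ptr + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample(self, batch_size, current_state, candidate_factor=4):
        # 1) Draw an oversized candidate set uniformly at random.
        n_cand = min(self.size, candidate_factor * batch_size)
        idx = np.random.choice(self.size, n_cand, replace=False)
        cand = self.states[idx]
        # 2) Rank candidates by cosine similarity to the current state.
        sim = cand @ current_state / (
            np.linalg.norm(cand, axis=1) * np.linalg.norm(current_state) + 1e-8)
        # 3) Keep the batch_size most similar transitions.
        keep = idx[np.argsort(sim)[-batch_size:]]
        return (self.states[keep], self.actions[keep],
                self.rewards[keep], self.next_states[keep])
```

In a multi-agent DDPG setup, each agent would call `sample` with its own most recent observation before a critic/actor update, so the minibatch is biased toward currently relevant experiences rather than a purely uniform draw.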