Attention-Augmented MADDPG in NOMA-Based Vehicular Mobile Edge Computational Offloading

Published: 2024, Last Modified: 02 Aug 2025. IEEE Internet of Things Journal, 2024. License: CC BY-SA 4.0.
Abstract: Vehicular mobile edge computing (vMEC) and nonorthogonal multiple access (NOMA) have emerged as promising technologies for enabling low-latency, high-throughput applications in vehicular networks. In this article, we propose a novel multiagent deep deterministic policy gradient (MADDPG) approach for resource allocation in NOMA-based vMEC systems. Our approach leverages deep reinforcement learning (DRL) to enable vehicles to offload computation-intensive tasks to nearby edge servers, optimizing resource allocation decisions while ensuring low-latency communication. We introduce an attention mechanism within the MADDPG model that dynamically focuses on relevant information from the input state and joint actions, enhancing the model's predictive accuracy. Additionally, we propose an attention-based experience replay method to expedite network convergence. The simulation results highlight the effectiveness of multiagent reinforcement learning (MARL) algorithms, such as MADDPG with attention, in achieving better convergence and performance across various scenarios. The influence of different model parameters, such as input data volumes, task load levels, and resource configurations, on the optimization results is also evident. The decision-making processes of the agents are dynamic and depend on factors specific to the task and environment.
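To make the attention idea concrete, below is a minimal sketch of scaled dot-product attention as it might sit inside a centralized MADDPG critic, letting one agent's query weight the state-action embeddings of the other agents. All function names, shapes, and the embedding dimension are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_context(query, keys, values):
    """Scaled dot-product attention for one agent's critic.

    query  : (d,)   embedding of this agent's own state-action pair
    keys   : (n, d) embeddings of the other agents' state-action pairs
    values : (n, d) values paired with those keys
    Returns the attended context vector (d,) and the attention weights (n,).
    """
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)   # similarity of each agent to the query
    weights = softmax(scores)            # convex weights over the n agents
    return weights @ values, weights

# Toy usage: agent 0 attends over 3 neighboring vehicles' embeddings
# (random data stands in for learned state-action encodings).
rng = np.random.default_rng(0)
query = rng.normal(size=8)
keys = rng.normal(size=(3, 8))
values = rng.normal(size=(3, 8))
context, weights = attention_context(query, keys, values)
```

In a full critic, `context` would be concatenated with the agent's own embedding and fed to the Q-value head, so gradients teach the network which neighbors' offloading actions matter for each decision.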