Abstract: With the development of the Internet of Things, research on edge computing has surged. The essence of edge computing is to bring processing closer to data sources, minimizing latency and improving efficiency. However, resource constraints, limited network bandwidth, and dynamic demands make optimization in edge computing challenging. Traditional reinforcement learning methods require manual feature engineering and cannot automatically learn high-level features, making them unsuitable for high-dimensional states and complex decision-making. To address these challenges, this paper investigates the offloading problem in edge networks and develops a model based on Multi-Agent Deep Q-Learning (MA-DQN). It introduces a self-learning offloading strategy in which each user acts independently, observes only its local environment, and makes near-optimal offloading decisions without knowing other users' conditions. Simulation results demonstrate that the proposed scheme minimizes system utility, approaching the optimal solution.
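The per-user agent described above can be sketched as follows. This is a minimal, illustrative implementation assuming a toy two-layer Q-network in NumPy with epsilon-greedy exploration and a one-step TD update; the class name, state features (e.g. task size, channel gain, queue length), and all hyperparameters are assumptions for illustration, not details from the paper.

```python
import numpy as np

class OffloadingAgent:
    """One independent user agent: observes only its local state and
    chooses an offloading action (e.g. 0 = compute locally, 1 = offload)
    without knowledge of other users' conditions."""

    def __init__(self, state_dim, n_actions, hidden=16, lr=0.1,
                 gamma=0.9, eps=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # small two-layer network approximating Q(s, a)
        self.W1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.lr, self.gamma, self.eps = lr, gamma, eps
        self.n_actions = n_actions
        self.rng = rng

    def q_values(self, s):
        h = np.tanh(s @ self.W1)   # hidden activations
        return h @ self.W2, h      # Q(s, ·) and hidden layer

    def act(self, s):
        # epsilon-greedy: explore with probability eps, else greedy
        if self.rng.random() < self.eps:
            return int(self.rng.integers(self.n_actions))
        q, _ = self.q_values(s)
        return int(np.argmax(q))

    def update(self, s, a, r, s_next):
        # one-step TD target: r + gamma * max_a' Q(s', a')
        q, h = self.q_values(s)
        q_next, _ = self.q_values(s_next)
        td_err = r + self.gamma * np.max(q_next) - q[a]
        # gradient step on the squared TD error (chain rule by hand)
        self.W2[:, a] += self.lr * td_err * h
        self.W1 += self.lr * td_err * np.outer(s, (1 - h**2) * self.W2[:, a])
        return td_err
```

In a multi-agent simulation, one such agent would be instantiated per user; each observes its own local state, selects an action, receives a reward reflecting its contribution to the system utility, and updates its own network independently.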