Task Prioritization in Multiagent Environments: A Novel Approach Using Nash Q-Learning

Published: 2025, Last Modified: 12 Nov 2025. IEEE Trans. Consumer Electron. 2025. License: CC BY-SA 4.0
Abstract: With the rapid proliferation of smartphones and other terminal devices, edge computing has become increasingly important for overcoming on-device computational limitations and improving service quality through task offloading. However, most existing approaches handle tasks on a first-come, first-served basis, neglecting task prioritization, which is critical in real-world scenarios where tasks differ in urgency and resource requirements. This paper addresses the challenge of effective task prioritization in multi-agent task offloading by proposing a Nash Q-learning-based strategy. Our method optimizes task allocation decisions with Nash Q-learning, integrating Nash equilibrium principles to enhance adaptability and dynamic decision-making in a multi-agent environment. The proposed algorithm is evaluated through extensive simulations that compare total delay, resource utilization, and task success rate across several task offloading strategies. The results show that the Nash Q-learning approach outperforms conventional techniques, substantially lowering overall latency, improving resource allocation, and maintaining high task success rates. This work demonstrates the efficacy of Nash Q-learning for dynamically prioritizing tasks in multi-agent edge computing environments, providing a robust framework for effective resource management and collaborative task completion, and highlights its applicability to real-world, resource-constrained settings.
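The core idea described in the abstract, updating each agent's Q-values toward the Nash equilibrium value of the next state's stage game rather than toward an independent maximum, can be sketched in a minimal tabular form. Everything here is an illustrative assumption, not the paper's actual formulation: the states stand in for queue-load levels, the two actions for {process locally, offload}, and the reward shape and environment dynamics are toy placeholders.

```python
import itertools

import numpy as np

# Toy two-agent Nash Q-learning sketch. States (assumed queue-load levels),
# actions (0 = process locally, 1 = offload), rewards, and dynamics are all
# illustrative assumptions, not the paper's actual model.

N_STATES, N_ACTIONS = 4, 2
ALPHA, GAMMA = 0.1, 0.9
rng = np.random.default_rng(0)

# Each agent keeps a Q-table indexed by (state, agent-0 action, agent-1 action).
Q = [np.zeros((N_STATES, N_ACTIONS, N_ACTIONS)) for _ in range(2)]

def pure_nash(q0, q1):
    """Find a pure-strategy Nash equilibrium (a0, a1) of the stage game
    given by payoff matrices q0, q1; fall back to the joint-max cell."""
    for a0, a1 in itertools.product(range(N_ACTIONS), repeat=2):
        # a0 is a best response to a1, and a1 is a best response to a0.
        if q0[a0, a1] >= q0[:, a1].max() and q1[a0, a1] >= q1[a0, :].max():
            return a0, a1
    return np.unravel_index((q0 + q1).argmax(), q0.shape)

def nash_q_update(s, a0, a1, rewards, s_next):
    """One Nash Q-learning step: bootstrap on the equilibrium value of the
    next state's stage game instead of each agent's independent max."""
    na0, na1 = pure_nash(Q[0][s_next], Q[1][s_next])
    for i, r_i in enumerate(rewards):
        Q[i][s, a0, a1] += ALPHA * (r_i + GAMMA * Q[i][s_next][na0, na1]
                                    - Q[i][s, a0, a1])

def env_step(s, a0, a1):
    """Illustrative dynamics: offloading pays off only when load is high."""
    high_load = int(s >= 2)
    rewards = tuple(1.0 if a == high_load else -0.5 for a in (a0, a1))
    return rewards, int(rng.integers(N_STATES))

s = 0
for _ in range(2000):  # purely random exploration, for brevity
    a0, a1 = int(rng.integers(N_ACTIONS)), int(rng.integers(N_ACTIONS))
    rewards, s_next = env_step(s, a0, a1)
    nash_q_update(s, a0, a1, rewards, s_next)
    s = s_next
```

After training, the equilibrium policy read off the learned Q-tables favors offloading in the high-load states, which is the behavior the toy reward encodes. A faithful implementation of the paper's setting would replace the brute-force pure-equilibrium search with a general stage-game solver and the toy environment with the actual delay and resource model.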