Subgoal-Based Hierarchical Reinforcement Learning for Multiagent Collaboration

Cheng Xu, Yuchen Shi, Changtian Zhang, Ran Wang, Shihong Duan, Yadong Wan, Xiaotong Zhang

Published: 01 Jan 2026, Last Modified: 26 Jan 2026. IEEE Transactions on Systems, Man, and Cybernetics: Systems. License: CC BY-SA 4.0
Abstract: Recent advances in reinforcement learning (RL) have driven progress across many domains; however, RL algorithms often struggle in complex multiagent environments due to instability, low sample efficiency, and the curse of dimensionality. Hierarchical RL (HRL) provides a structured framework for decomposing complex tasks into more manageable subtasks, making it a promising approach for multiagent systems. In this article, we introduce a novel hierarchical architecture that autonomously generates effective subgoals without explicit constraints, enhancing both training stability and adaptability. To further improve sample efficiency, we propose a dynamic goal-generation strategy that adjusts subgoals in response to environmental changes. Additionally, we address the critical challenge of credit assignment in multiagent settings by integrating our hierarchical architecture with a modified QMIX network, facilitating more effective strategy coordination. Extensive comparative experiments against state-of-the-art RL algorithms demonstrate that our approach achieves superior convergence speed and overall performance in multiagent environments. These results validate the effectiveness and flexibility of our method in handling complex coordination tasks. The implementation is publicly available at https://github.com/SICC-Group/GMAH.
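For context on the credit-assignment component the abstract mentions, the sketch below shows a standard QMIX mixing network in PyTorch. This illustrates the baseline mechanism the authors modify, not their modified variant itself; the embedding dimension and hypernetwork depth are illustrative assumptions.

```python
# Minimal sketch of a standard QMIX mixing network (Rashid et al., 2018).
# Hypernetworks conditioned on the global state generate the weights that
# mix per-agent Q-values into a joint Q_tot. Layer sizes are assumptions.
import torch
import torch.nn as nn

class QMixer(nn.Module):
    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks: the global state produces the mixing parameters.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 1),
        )

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents); state: (batch, state_dim)
        bs = agent_qs.size(0)
        agent_qs = agent_qs.view(bs, 1, self.n_agents)
        # abs() keeps the mixing weights non-negative, so Q_tot is monotonic
        # in each agent's Q-value and the joint argmax decomposes into
        # independent per-agent argmaxes.
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = torch.relu(torch.bmm(agent_qs, w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        q_tot = torch.bmm(hidden, w2) + b2  # (batch, 1, 1)
        return q_tot.view(bs, 1)
```

The monotonicity constraint enforced by the abs() calls is what lets centralized training assign credit through Q_tot while agents still act greedily on their own Q-values at execution time.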