Abstract: Multi-agent reinforcement learning (MARL) has demonstrated strong performance on complex decision-making tasks involving multiple agents. However, the intricate and dynamic interactions among agents make such problems exceptionally challenging. Existing MARL methods often simplify the problem by implicitly decomposing shared rewards into individual utilities, neglecting the underlying interconnections between relevant entities. To overcome these limitations, we propose Multi-Agent Pattern Extraction (MAPE), a novel framework that captures cooperation patterns from both agent-level and global perspectives to improve decision-making and collaboration efficiency. Specifically, MAPE introduces two key modules, the Agent Pattern Extractor (APE) and the Global Pattern Extractor (GPE), which attend to specific interactions between entities of interest from individual and global perspectives, respectively. The APE module computes each agent's attention to other entities across different interaction patterns and passes this information to the GPE module. The GPE module then integrates the agent-specific pattern information with state features to identify the overall interaction pattern of the multi-agent system. By filtering out irrelevant interactions between unrelated entities and highlighting meaningful relationships, MAPE fosters more focused cooperation and more efficient learning. Extensive experiments on the StarCraft II micromanagement benchmark demonstrate the effectiveness of MAPE in improving learning efficiency in complex multi-agent environments.
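The abstract describes a two-module design: per-agent attention over entities under multiple interaction patterns (APE), followed by a global fusion with state features (GPE). The following is a minimal, hypothetical sketch of that idea, not the authors' implementation; the class names, tensor shapes, the number of patterns, and the pooling/fusion choices are all illustrative assumptions.

```python
# Hypothetical sketch of the APE/GPE idea (illustrative assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AgentPatternExtractor(nn.Module):
    """APE sketch: for each agent, compute attention over all entities
    under several candidate interaction patterns (one head per pattern)."""

    def __init__(self, entity_dim: int, embed_dim: int, num_patterns: int):
        super().__init__()
        self.num_patterns = num_patterns
        self.embed_dim = embed_dim
        self.query = nn.Linear(entity_dim, embed_dim * num_patterns)
        self.key = nn.Linear(entity_dim, embed_dim * num_patterns)
        self.value = nn.Linear(entity_dim, embed_dim * num_patterns)

    def forward(self, agent_feats, entity_feats):
        # agent_feats:  (B, n_agents, entity_dim)
        # entity_feats: (B, n_entities, entity_dim)
        B, A, _ = agent_feats.shape
        E = entity_feats.shape[1]
        P, D = self.num_patterns, self.embed_dim

        q = self.query(agent_feats).view(B, A, P, D)
        k = self.key(entity_feats).view(B, E, P, D)
        v = self.value(entity_feats).view(B, E, P, D)

        # Attention of each agent to each entity, per pattern.
        scores = torch.einsum("bapd,bepd->bape", q, k) / D ** 0.5
        attn = F.softmax(scores, dim=-1)                   # (B, A, P, E)

        # Pattern-specific entity summaries for every agent.
        agent_patterns = torch.einsum("bape,bepd->bapd", attn, v)
        return agent_patterns, attn


class GlobalPatternExtractor(nn.Module):
    """GPE sketch: fuse per-agent pattern summaries with global state
    features to infer the system-level interaction pattern."""

    def __init__(self, embed_dim: int, num_patterns: int, state_dim: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(embed_dim * num_patterns + state_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_patterns),
        )

    def forward(self, agent_patterns, state):
        # agent_patterns: (B, A, P, D) from the APE module
        # state:          (B, state_dim) global state features
        pooled = agent_patterns.mean(dim=1).flatten(1)     # (B, P*D)
        logits = self.fuse(torch.cat([pooled, state], dim=-1))
        return F.softmax(logits, dim=-1)                   # (B, P)


if __name__ == "__main__":
    B, A, E, entity_dim, state_dim = 4, 3, 8, 16, 32
    ape = AgentPatternExtractor(entity_dim, embed_dim=32, num_patterns=4)
    gpe = GlobalPatternExtractor(embed_dim=32, num_patterns=4, state_dim=state_dim)

    agent_patterns, attn = ape(torch.randn(B, A, entity_dim),
                               torch.randn(B, E, entity_dim))
    global_pattern = gpe(agent_patterns, torch.randn(B, state_dim))
    print(attn.shape, global_pattern.shape)  # (4, 3, 4, 8) (4, 4)
```

In this sketch, low attention weights play the role of filtering out interactions with unrelated entities, while the pattern distribution from the GPE summarizes the overall interaction mode of the system; how these signals feed the value decomposition or policy is not specified by the abstract.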