ROGC: Role-Oriented Graph Convolution Based Multi-Agent Reinforcement Learning

ICME 2022 (modified: 18 Apr 2023)
Abstract: Role-oriented learning can improve the performance of multi-agent reinforcement learning by decomposing complex multi-agent tasks into different roles. However, because the environment is dynamic and agents interact with one another, the role undertaken by an agent changes rapidly over time. The roles of agents should therefore adapt to the changing situation during learning. In this paper, we propose a role-oriented graph convolution based multi-agent reinforcement learning framework (ROGC). First, we design a role assigner that learns roles from samples generated by the environment and uses them to classify agents into groups. To further enhance cooperation among agents in the same group, we design a graph convolutional module that enables intra-role communication based on the discovered roles. Using these roles and the extracted role features, we design a role-oriented policy learning module that embeds the role information into the algorithm and generates effective policies for individual agents. Finally, we introduce an auto-encoder that distills the intra-role cooperation knowledge from the graph convolutional module, allowing our framework to execute in a decentralized way. Extensive experiments show that our framework learns dynamic roles and exploits them fully, outperforming popular MARL methods.
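To make the intra-role communication idea concrete, the sketch below shows one plausible form of a graph convolution restricted to agents sharing a role: agents are connected if and only if the role assigner gave them the same label, and each agent averages its neighbors' features before a learned projection. This is a minimal illustration under assumed shapes and names (`intra_role_gcn_layer` is hypothetical), not the authors' actual implementation.

```python
import numpy as np

def intra_role_gcn_layer(features, roles, weight):
    """One graph-convolution step where each agent aggregates features
    only from agents assigned the same role (illustrative sketch).

    features: (n_agents, d_in) agent feature vectors
    roles:    (n_agents,) integer role labels from a role assigner
    weight:   (d_in, d_out) learnable projection matrix
    """
    # Adjacency: agents are connected iff they share a role (includes self-loops).
    adj = (roles[:, None] == roles[None, :]).astype(float)
    # Row-normalize so each agent averages over its own role group.
    adj /= adj.sum(axis=1, keepdims=True)
    # Aggregate within the role group, project, then apply a ReLU.
    return np.maximum(adj @ features @ weight, 0.0)

# Example: 4 agents, two role groups {0, 1} and {2, 3}.
feats = np.eye(4)
roles = np.array([0, 0, 1, 1])
out = intra_role_gcn_layer(feats, roles, np.eye(4))
# Agents in the same role group end up with identical aggregated features.
```

Because the adjacency is rebuilt from the current role labels at every step, the communication pattern tracks role changes as the role assigner updates its groupings.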