Fast Teammate Adaptation in the Presence of Sudden Policy Change

Published: 08 May 2023, Last Modified: 26 Jun 2023
Venue: UAI 2023
Keywords: Multi-agent reinforcement learning, Teammate adaptation, Open-environment coordination, Non-stationary reinforcement learning
TL;DR: A new framework, Fastap, to handle the situation where teammates' policies undergo sudden change within one episode in cooperative MARL.
Abstract: Cooperative multi-agent reinforcement learning (MARL), where agents coordinate with teammate(s) toward a shared goal, may suffer from non-stationarity caused by changes in teammates' policies. Prior works mainly concentrate on policy changes across episodes, ignoring the fact that teammates may undergo sudden policy changes within an episode, which can lead to miscoordination and poor performance. We formulate the problem as an open Dec-POMDP, where we control some agents to coordinate with uncontrolled teammates whose policies may change within one episode. We then develop a new framework, Fast teammate adaptation (Fastap), to address the problem. Concretely, we first train versatile teammate policies and assign them to different clusters via the Chinese Restaurant Process (CRP). We then train the controlled agent(s) to coordinate with sampled uncontrolled teammates by capturing their identities as context for fast adaptation. Finally, each agent uses its local information to infer the teammates' context and act accordingly. This process proceeds alternately, leading to a robust policy that can adapt to any teammates during the decentralized execution phase. We show on multiple multi-agent benchmarks that Fastap achieves superior performance over multiple baselines in both stationary and non-stationary scenarios.
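For readers unfamiliar with CRP-based clustering, the sketch below illustrates one way the cluster-assignment step over teammate policies could look. It is not the paper's implementation: the `behavior_distance` callable, the `alpha` and `sigma` parameters, and the sequential assignment loop are all assumptions chosen for illustration.

```python
# Minimal sketch (assumptions, not the paper's method): assign teammate policy
# checkpoints to clusters with a Chinese Restaurant Process-style prior, biased
# toward clusters whose members behave similarly to the incoming policy.
import numpy as np

def crp_assign(policies, alpha=1.0, sigma=1.0, behavior_distance=None, rng=None):
    """Sequentially assign each policy to an existing cluster or open a new one.

    alpha: CRP concentration; larger alpha favors opening new clusters.
    sigma: scale of the (hypothetical) behavioral similarity kernel.
    behavior_distance: assumed callable returning a distance between two policies.
    """
    rng = np.random.default_rng() if rng is None else rng
    clusters = []  # each cluster is a list of policy indices
    for i, pi in enumerate(policies):
        weights = []
        for members in clusters:
            # Larger, behaviorally closer clusters receive more weight.
            avg_dist = np.mean([behavior_distance(pi, policies[j]) for j in members])
            weights.append(len(members) * np.exp(-avg_dist / sigma))
        weights.append(alpha)  # probability mass reserved for a brand-new cluster
        probs = np.array(weights) / np.sum(weights)
        choice = rng.choice(len(probs), p=probs)
        if choice == len(clusters):
            clusters.append([i])   # open a new cluster
        else:
            clusters[choice].append(i)
    return clusters

if __name__ == "__main__":
    # Toy usage: "policies" are 1-D behavior descriptors, distance is absolute difference.
    toy_policies = [0.0, 0.1, 5.0, 5.2, 10.0]
    dist = lambda a, b: abs(a - b)
    print(crp_assign(toy_policies, alpha=1.0, sigma=1.0, behavior_distance=dist))
```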
Supplementary Material: pdf
Other Supplementary Material: zip