Explicitly Learning Policy Under Partial Observability in Multiagent Reinforcement Learning

Published: 2023 · Last Modified: 26 Dec 2025 · IJCNN 2023 · CC BY-SA 4.0
Abstract: We explore explicit solutions for multiagent reinforcement learning (MARL) under the constraint of partial observability. Within the general framework of centralized training with decentralized execution (CTDE), existing methods implicitly alleviate partial observability by introducing global information during centralized training. However, such implicit solutions cannot fully address partial observability and show low sample efficiency in many MARL problems. In this paper, we focus on the influence of partial observability on the policies of agents, and formally derive an ideal form of policy that maximizes the MARL objective under partial observability. Furthermore, we develop a new method named Explicitly Learning Policy (ELP), which adopts a novel teacher-student structure and utilizes knowledge distillation to explicitly learn an individual policy under partial observability for each agent. Compared to prior methods, ELP presents a more general and interpretable training process, and the procedure of ELP can be easily extended to existing methods for a performance boost. Our empirical experiments on the StarCraft II micromanagement benchmark show that ELP significantly outperforms prevailing state-of-the-art baselines, which demonstrates the advantage of ELP in addressing partial observability and improving sample efficiency.
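The teacher-student knowledge distillation idea described in the abstract can be illustrated with a minimal sketch: a teacher policy conditioned on privileged global information guides a student policy that only sees a local observation, by minimizing the KL divergence between their action distributions. The logits and function names below are illustrative assumptions, not the actual ELP loss or architecture from the paper.

```python
import math

def softmax(logits):
    # Convert raw logits to an action probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the student distribution q is from the teacher p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical per-step logits (assumed values for illustration):
# the teacher is conditioned on the global state during centralized training,
# the student only on the agent's local observation.
teacher_logits = [2.0, 0.5, -1.0]
student_logits = [1.0, 1.0, 0.0]

teacher_pi = softmax(teacher_logits)
student_pi = softmax(student_logits)

# Distillation term: minimizing this pushes the partially observing
# student toward the teacher's behavior.
distill_loss = kl_divergence(teacher_pi, student_pi)
```

In practice this term would be one component of the training objective and would be minimized with respect to the student's parameters only; the teacher is discarded at execution time so that each agent acts decentrally from its own observations.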