Keywords: Reinforcement Learning, Value function factorization, Multi-Agent
TL;DR: Recovers the optimal joint policy by iteratively identifying potentially optimal joint actions and assigning them higher training weights.
Abstract: Value function factorization is widely used in cooperative multi-agent reinforcement learning (MARL).
Existing approaches often impose monotonicity constraints between the joint action value and individual action values to enable decentralized execution.
However, such constraints limit the expressiveness of value factorization, restricting the range of joint action values that can be represented and hindering the learning of optimal policies.
To address this, we propose Potentially Optimal Joint Actions Weighting (POW), a method that ensures optimal policy recovery where existing approximate weighting strategies may fail.
POW iteratively identifies potentially optimal joint actions and assigns them higher weights in a theoretically grounded weighted training process. We prove that this mechanism guarantees recovery of the true optimal policy, overcoming the limitations of prior heuristic weighting strategies.
POW is architecture-agnostic and can be seamlessly integrated into existing value factorization algorithms.
Extensive experiments on matrix games, difficulty-enhanced predator-prey tasks, SMAC, SMACv2, and a highway-env intersection scenario show that POW substantially improves stability and consistently surpasses state-of-the-art value-based MARL methods.
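Below is a minimal sketch of the weighted-training idea described in the abstract, assuming a QMIX-style monotone mixer whose output stands in for the joint action value. The candidate rule (flagging a joint action when its bootstrapped target exceeds the current factorized estimate), the function name `weighted_td_loss`, and the weights `w_high`/`w_low` are illustrative assumptions, not the paper's exact criterion.

```python
import torch
import torch.nn.functional as F

def weighted_td_loss(q_joint, q_target, is_potentially_optimal,
                     w_high=1.0, w_low=0.1):
    """Weighted TD loss: transitions whose joint action is flagged as
    potentially optimal receive the larger weight; the rest are down-weighted."""
    weights = torch.where(is_potentially_optimal,
                          torch.full_like(q_joint, w_high),
                          torch.full_like(q_joint, w_low))
    per_sample = F.mse_loss(q_joint, q_target.detach(), reduction="none")
    return (weights * per_sample).mean()

# Toy usage with random tensors standing in for mixer outputs.
q_joint = torch.randn(64, requires_grad=True)          # Q_tot(s, u) from a monotone mixer
q_target = q_joint.detach() + 0.5 * torch.randn(64)    # r + gamma * max_u' Q_tot_target(s', u')

# Hypothetical candidate rule: a joint action is "potentially optimal" when the
# bootstrapped target exceeds the current monotone estimate, i.e. the factorized
# value under-represents it.
candidates = q_target > q_joint.detach()

loss = weighted_td_loss(q_joint, q_target, candidates)
loss.backward()
```

In this sketch, up-weighting the flagged transitions biases the factorized value toward the joint actions the monotone mixer would otherwise underestimate, which is the role the abstract attributes to POW's iterative weighting.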
Primary Area: reinforcement learning
Submission Number: 24619