Keywords: Continued fraction Q-learning, high-order interaction, interpretability, multi-agent reinforcement learning
TL;DR: We propose Continued Fraction Q-Learning (QCoFr), a novel value decomposition framework that models rich cooperation in multi-agent reinforcement learning without combinatorial explosion.
Abstract: Modeling interactions among agents is crucial for effective coordination and for understanding cooperation mechanisms in multi-agent reinforcement learning (MARL).
However, previous efforts to model high-order interactions have been hindered primarily by combinatorial explosion or by the opacity of black-box network structures.
In this paper, we propose a novel value decomposition framework, called Continued Fraction Q-Learning (QCoFr), which can flexibly capture arbitrary-order agent interactions with only linear complexity $\mathcal{O}\left({n}\right)$ in the number of agents, thus avoiding the combinatorial explosion when modeling rich cooperation.
Furthermore, we introduce the variational information bottleneck to extract latent information for estimating credits.
This latent information helps agents filter out noisy interactions, thereby significantly enhancing both cooperation and interpretability.
Extensive experiments demonstrate that QCoFr not only consistently achieves better performance but also provides interpretability that aligns with our theoretical analysis.
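The submission does not spell out the mixing architecture here, so the following is only a minimal sketch of the general idea named in the abstract: evaluating a continued fraction over per-agent utilities takes a single $\mathcal{O}(n)$ pass, while its expansion contains products of utilities across agents, i.e. arbitrary-order interactions. The coefficient tensors `a` and `b` are hypothetical placeholders, not the paper's actual parameterization.

```python
import torch

def continued_fraction_mix(q_values, a, b, eps=1e-6):
    """Illustrative continued-fraction mixing of per-agent utilities.

    q_values, a, b: tensors of shape (batch, n_agents).
    Evaluated innermost-first, so one pass over the n agents suffices
    (linear complexity), yet expanding the fraction yields cross-agent
    product terms, i.e. high-order interactions.
    """
    n = q_values.shape[-1]
    frac = torch.zeros_like(q_values[..., 0])
    for i in reversed(range(n)):  # innermost level first
        # keeping a[..., i] positive avoids near-zero denominators in practice
        frac = b[..., i] * q_values[..., i] / (a[..., i] + frac + eps)
    return frac  # shape (batch,)

# Example usage with random inputs
batch, n_agents = 32, 5
q = torch.randn(batch, n_agents)
a = torch.full((batch, n_agents), 2.0)
b = torch.ones(batch, n_agents)
q_tot = continued_fraction_mix(q, a, b)
```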
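For the variational information bottleneck mentioned in the abstract, a standard formulation encodes an agent's local features into a Gaussian latent, samples it with the reparameterization trick, and regularizes with a KL term so that only information useful for credit estimation is retained. The layer sizes and KL weight below are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class VIBEncoder(nn.Module):
    """Generic variational information bottleneck encoder (sketch)."""

    def __init__(self, in_dim, latent_dim, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)

    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * epsilon
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # KL(q(z|x) || N(0, I)), averaged over the batch
        kl = 0.5 * (logvar.exp() + mu.pow(2) - 1.0 - logvar).sum(-1).mean()
        return z, kl

# The KL term would typically be added to the TD loss with a small weight,
# e.g. loss = td_loss + 0.01 * kl (the 0.01 is a hypothetical choice).
```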
Supplementary Material: zip
Primary Area: Reinforcement learning (e.g., decision and control, planning, hierarchical RL, robotics)
Submission Number: 18630