Abstract: In multi-agent systems, reinforcement learning algorithms frequently face challenges rooted in the non-stationarity and complexity of the environment. This paper presents Multi-Agent Diffuser (MA-Diffuser), a method that uses diffusion models to represent policies in a multi-agent setting, enabling efficient and expressive inter-agent coordination. Our method embeds action-value maximization into the sampling process of a conditional diffusion model, guiding sampling toward optimal actions that remain close to the behavior policy. This design exploits the expressive power of diffusion models while mitigating the function-approximation errors common in offline reinforcement learning. We validate the approach on the Multi-Agent Particle Environment and plan to extend it to a broader range of tasks.
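The core mechanism described above, embedding action-value maximization inside the sampling loop of a conditional diffusion model, can be illustrated with a minimal single-agent sketch. Everything here is an assumption for illustration: the quadratic critic `q_value`, the linear denoising schedule, the guidance weight, and the noise scale are placeholders, not the paper's actual MA-Diffuser implementation, and the multi-agent coordination details are omitted.

```python
import numpy as np

def q_value(state, action):
    # Hypothetical quadratic critic: highest when action == -state.
    return -float(np.sum((action + state) ** 2))

def q_grad(state, action):
    # Analytic gradient of the quadratic critic w.r.t. the action.
    return -2.0 * (action + state)

def sample_action(state, steps=50, guidance=0.1, seed=0):
    """Reverse-diffusion sketch: start from Gaussian noise and denoise,
    nudging each step along the critic's gradient so that sampling
    drifts toward high-Q actions while staying near the behavior prior
    (taken here to be zero-mean for simplicity)."""
    rng = np.random.default_rng(seed)
    action = rng.standard_normal(state.shape)
    for t in range(steps, 0, -1):
        # Denoising step: shrink toward the (assumed) behavior-policy mean.
        action = action * (1.0 - 1.0 / steps)
        # Q-guidance: ascend the action-value inside the sampler.
        action = action + guidance * q_grad(state, action)
        # Re-inject noise on an annealed (shrinking) schedule.
        action = action + (t / steps) * 0.05 * rng.standard_normal(state.shape)
    return action

state = np.array([1.0, -0.5])
action = sample_action(state)
# The guided sample should score far better under the critic than an
# unguided (zero) action and land near the critic's optimum, -state.
```

With the quadratic critic above, the guided sampler converges close to the optimum `-state`; in the actual method the gradient would come from a learned Q-network and the denoiser from a trained conditional diffusion model.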