Diffusion-based Episodes Augmentation for Offline Multi-Agent Reinforcement Learning

Published: 17 Jun 2024, Last Modified: 11 Jul 2024, 2nd SPIGM Workshop @ ICML Poster, CC BY 4.0
Keywords: augmentation, diffusion, offline reinforcement learning, multi-agent learning
TL;DR: We present a novel diffusion-based episodes augmentation method for offline Multi-Agent Reinforcement Learning.
Abstract: Offline multi-agent reinforcement learning (MARL) is increasingly recognized as crucial for deploying RL algorithms in environments where real-time interaction is impractical, risky, or costly. In the offline setting, learning from a static dataset of past interactions allows robust and safe policies to be developed without live data collection, which can be fraught with challenges. Building on this foundational importance, we present EAQ, Episodes Augmentation guided by Q-total loss, a novel diffusion-based framework for offline MARL. EAQ integrates the Q-total function directly into the diffusion model as guidance to maximize the global return of an episode, eliminating the need for separate guidance training. Our focus lies primarily on cooperative scenarios, where agents must act collectively toward a shared goal, i.e., maximizing the global return. Consequently, we demonstrate that our cooperative episode augmentation significantly boosts offline MARL algorithms compared to training on the original dataset alone, improving the normalized return by +17.3% and +12.9% for the $medium$ and $poor$ behavioral policies in the SMAC simulator, respectively.
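For concreteness, the sketch below illustrates one way Q-total guidance can steer a diffusion sampler toward high-return episodes, in the style of classifier-guided DDPM sampling. The `Denoiser` and `QTotal` networks, the noise schedule, and the guidance form are illustrative assumptions, not the authors' exact EAQ implementation.

```python
# Minimal sketch: Q-total-guided reverse diffusion over flattened episode tensors.
# All class/function names here are hypothetical stand-ins for the paper's components.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Predicts the noise added to a flattened multi-agent episode tensor."""
    def __init__(self, episode_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(episode_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, episode_dim),
        )

    def forward(self, x, t):
        t_emb = t.float().unsqueeze(-1) / 1000.0  # simple scalar timestep embedding
        return self.net(torch.cat([x, t_emb], dim=-1))

class QTotal(nn.Module):
    """Scores an episode by its predicted global (team) return."""
    def __init__(self, episode_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(episode_dim, hidden), nn.SiLU(), nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def _noise_schedule(T=1000):
    betas = torch.linspace(1e-4, 2e-2, T)
    return betas, torch.cumprod(1.0 - betas, dim=0)

def sample_augmented_episode(denoiser, q_total, episode_dim, guidance_scale=1.0, T=1000):
    """Reverse diffusion where, at each step, the sample is nudged in the
    direction that increases the Q-total score of the generated episode."""
    betas, alpha_bars = _noise_schedule(T)
    x = torch.randn(1, episode_dim)
    for t in reversed(range(T)):
        t_batch = torch.full((1,), t)
        with torch.no_grad():
            eps = denoiser(x, t_batch)
            # Standard DDPM posterior mean from the predicted noise.
            mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) \
                   / torch.sqrt(1.0 - betas[t])

        # Q-total guidance: gradient of the critic w.r.t. the current sample.
        x_in = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(q_total(x_in).sum(), x_in)[0]
        mean = mean + guidance_scale * betas[t] * grad

        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = (mean + torch.sqrt(betas[t]) * noise).detach()
    return x
```

A generated episode tensor would then be unflattened back into per-agent observations, actions, and rewards and appended to the offline dataset before running the downstream offline MARL algorithm.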
Submission Number: 70