Keywords: Sequence model, state space model, Mamba, multi-agent reinforcement learning, self-predictive representation learning
TL;DR: This paper introduces a novel framework that leverages the Mamba model to predict agents' future observations, aiding in more stable and informed decision-making.
Abstract: In multi-agent reinforcement learning (MARL), agents must collaborate to achieve team goals while having access only to limited local observations. This partial observability, coupled with the dynamic presence of other agents, renders the environment non-stationary for each agent, complicating policy training. A critical challenge in this setting is the efficient use of historical information for decision-making. Building on the hypothesis that self-predictive features can improve policy learning, we introduce self-predictive Mamba, a novel framework that integrates the Mamba model with self-predictive representation learning for decentralized policy optimization. Self-predictive Mamba employs a distinctive policy architecture in which the Mamba model is trained to predict future observations, supporting more stable and informed decision-making. Extensive experiments demonstrate that self-predictive Mamba significantly outperforms widely used recurrent neural network (RNN)-based MARL policies and surpasses those that naively employ the Mamba model.
Supplementary Material: zip
Primary Area: reinforcement learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4326