MARL-GPT: Foundation Model for Multi-Agent Reinforcement Learning

Published: 17 Dec 2025, Last Modified: 17 Dec 2025 · WoMAPF Oral · CC BY 4.0
Keywords: Multi-agent Pathfinding, Multi-agent Reinforcement Learning, Multi-Task Learning, Transformers
Abstract: Recent advances in multi-agent reinforcement learning (MARL) have demonstrated success in numerous challenging domains and environments, but typically require a specialized model for each task. In this work, we propose a coherent methodology that enables a single GPT-based model to learn and perform well across diverse MARL environments and tasks, including collision-avoidance and coordination problems (such as the multi-agent pathfinding scenarios of POGEMA), alongside established benchmarks like the StarCraft Multi-Agent Challenge (SMACv2) and Google Research Football (GRF). Our method, MARL-GPT, applies offline reinforcement learning at scale to expert trajectories (400M for SMACv2, 100M for GRF, and 1B for POGEMA), combined with a single transformer-based observation encoder that requires no task-specific tuning. By leveraging offline RL, we address the long-horizon planning and coordination challenges inherent in MAPF-like problems, enabling efficient learning without costly online environment interaction. Experiments show that MARL-GPT achieves performance competitive with specialized baselines in all tested environments. Our findings thus suggest that it is indeed possible to build a multi-task transformer-based model for a wide variety of substantially different multi-agent problems, paving the way toward a foundation model for MARL (akin to ChatGPT, Llama, or Mistral in natural language modeling).
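The abstract does not specify the architecture, but the key claim — one transformer observation encoder serving environments with different observation shapes, with no task-specific tuning — can be illustrated with a minimal hypothetical sketch. All class names, adapter design, and dimensions below are assumptions, not the authors' implementation: lightweight per-environment linear adapters project raw observations into a shared embedding space, and a single shared transformer encodes one token per agent.

```python
# Hypothetical sketch (NOT the MARL-GPT implementation): a single shared
# transformer observation encoder consuming observations from environments
# with different per-agent observation dimensionalities.
import torch
import torch.nn as nn


class SharedObsEncoder(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.d_model = d_model
        # Per-environment linear adapters (assumption): map raw observation
        # vectors of any size into the common d_model embedding space.
        self.adapters = nn.ModuleDict()
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True
        )
        # One shared transformer trunk across all environments.
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, env_name: str, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_agents, obs_dim) — one token per agent.
        if env_name not in self.adapters:
            self.adapters[env_name] = nn.Linear(obs.shape[-1], self.d_model)
        tokens = self.adapters[env_name](obs)
        # Self-attention over agent tokens lets agents' embeddings
        # condition on teammates, supporting coordination.
        return self.encoder(tokens)  # (batch, n_agents, d_model)


# Usage: the same encoder handles environments with different team sizes
# and observation sizes (dimensions here are illustrative only).
enc = SharedObsEncoder()
smac_obs = torch.randn(2, 5, 80)    # e.g. 5 agents, 80-dim observations
pogema_obs = torch.randn(2, 8, 32)  # e.g. 8 agents, 32-dim observations
z_smac = enc("smacv2", smac_obs)
z_pogema = enc("pogema", pogema_obs)
```

Both calls yield embeddings of shape `(batch, n_agents, 64)`, so a single downstream policy head can be trained offline on trajectories pooled from all environments.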
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 16