On the Fundamental Limitations of Decentralized Learnable Reward Shaping in Cooperative Multi-Agent Reinforcement Learning
Keywords: Multi-agent reinforcement learning, reward shaping, coordination, decentralized learning
TL;DR: Decentralized learnable reward shaping in cooperative multi-agent reinforcement learning fails to overcome coordination challenges, exposing fundamental limits in non-stationarity, credit assignment, and objective alignment.
Abstract: Recent advances in learnable reward shaping have shown promise in single-agent reinforcement learning by automatically discovering effective feedback signals. However, the effectiveness of decentralized learnable reward shaping in cooperative multi-agent settings remains poorly understood. We propose DMARL-RSA, a fully decentralized system in which each agent learns its own reward shaping, and evaluate it on cooperative navigation tasks in the simple\_spread\_v3 environment. Despite sophisticated reward learning, DMARL-RSA achieves only $-24.20 \pm 0.09$ average reward, compared to MAPPO with centralized training at $1.92 \pm 0.87$—a 26.12-point gap. DMARL-RSA performs similarly to simple independent learning (IPPO: $-23.19 \pm 0.96$), indicating that advanced reward shaping cannot overcome fundamental decentralized coordination limitations. Interestingly, decentralized methods achieve higher landmark coverage ($0.888 \pm 0.029$ for DMARL-RSA, $0.960 \pm 0.045$ for IPPO, out of 3 landmarks) but worse overall performance than centralized MAPPO ($0.273 \pm 0.008$ landmark coverage)—revealing a coordination paradox between local optimization and global performance. Our analysis identifies three critical barriers: (1) non-stationarity from concurrent policy updates, (2) exponential credit-assignment complexity, and (3) misalignment between individual reward optimization and global objectives. These results establish empirical limits for decentralized reward learning and underscore the necessity of centralized coordination for effective multi-agent cooperation.
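The paper does not include the DMARL-RSA implementation here, but the idea of fully decentralized learnable reward shaping can be sketched as each agent maintaining its own shaping function over its local observation and updating it independently. The sketch below assumes a linear potential-based form ($r'_i = r_i + \gamma\,\phi_i(s') - \phi_i(s)$); the function names, dimensions, and update rule are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

# Hypothetical sketch of decentralized potential-based reward shaping:
# each agent keeps a private potential phi_i over its LOCAL observation and
# shapes its own reward, with no parameter sharing or central critic.

rng = np.random.default_rng(0)
GAMMA, LR, OBS_DIM, N_AGENTS = 0.99, 0.01, 4, 3

# One linear potential function per agent (weights are illustrative).
weights = [rng.normal(scale=0.1, size=OBS_DIM) for _ in range(N_AGENTS)]

def potential(i, obs):
    """Agent i's learned potential of its local observation."""
    return float(weights[i] @ obs)

def shaped_reward(i, r, obs, next_obs):
    """Potential-based shaping: r + gamma*phi(s') - phi(s)."""
    return r + GAMMA * potential(i, next_obs) - potential(i, obs)

def update_potential(i, obs, signal):
    """Each agent adapts its shaping parameters independently; these
    concurrent updates are one source of the non-stationarity the
    abstract identifies."""
    weights[i] += LR * signal * obs

# Toy transition: agents shape rewards from local observations only.
obs = rng.normal(size=(N_AGENTS, OBS_DIM))
next_obs = rng.normal(size=(N_AGENTS, OBS_DIM))
env_rewards = [-1.0] * N_AGENTS  # e.g. distance penalty as in simple_spread

for i in range(N_AGENTS):
    r = shaped_reward(i, env_rewards[i], obs[i], next_obs[i])
    update_potential(i, obs[i], signal=r)
```

Because each $\phi_i$ sees only agent $i$'s observation and reward, locally good shaping (e.g. moving toward the nearest landmark) need not align with the team objective, which is the coordination paradox the abstract reports.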
Email Sharing: We authorize the sharing of all author emails with Program Chairs.
Data Release: We authorize the release of our submission and author names to the public in the event of acceptance.
Submission Number: 20