Agent-Temporal Credit Assignment for Optimal Policy Preservation in Sparse Multi-Agent Reinforcement Learning

Published: 01 Jun 2024, Last Modified: 17 Jun 2024. CoCoMARL 2024 Poster. License: CC BY 4.0
Keywords: multi-agent reinforcement learning, cooperation, temporal credit assignment, multi-agent credit assignment, delayed rewards, episodic rewards, policy equivalence, potential-based reward shaping, return decomposition
TL;DR: We propose a reward redistribution function to address agent-temporal credit assignment in multi-agent reinforcement learning, with theoretical guarantees that optimal policies are preserved
Abstract: The ability of agents to learn optimal policies is hindered in multi-agent environments where all agents receive a global reward signal only sparsely or at the end of an episode. The delayed nature of these rewards, especially in long-horizon tasks, makes it challenging for agents to evaluate their actions at intermediate time steps. In this paper, we propose Agent-Temporal Reward Redistribution (ATRR), a novel approach to the agent-temporal credit assignment problem that redistributes sparse environment rewards both temporally and at the agent level. ATRR first decomposes the sparse global reward into rewards for each time step, and then computes agent-specific rewards by estimating each agent's relative contribution to these decomposed temporal rewards. We theoretically prove that there exists a redistribution method equivalent to potential-based reward shaping, which guarantees that the optimal policy remains unchanged. Empirically, we demonstrate that ATRR stabilizes and expedites the learning process. We also show that ATRR, when used alongside single-agent reinforcement learning algorithms, performs as well as or better than their multi-agent counterparts.
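The abstract describes a two-stage redistribution (temporal, then agent-level) but does not specify the concrete weighting functions. The following is a minimal sketch of that two-stage structure under an illustrative assumption: both stages use softmax-normalized scores, so the dense rewards sum back to the episodic return. The names `step_scores` and `agent_scores` are hypothetical placeholders for whatever learned contribution estimates the paper's method produces; this is not the authors' implementation.

```python
import numpy as np

def atrr_redistribute(episode_return, step_scores, agent_scores):
    """Sketch of two-stage agent-temporal reward redistribution.

    episode_return : float       sparse return observed only at episode end
    step_scores    : (T,)        unnormalized per-time-step relevance scores
    agent_scores   : (T, N)      unnormalized per-agent contribution scores

    Returns a (T, N) array of dense per-agent, per-step rewards whose total
    equals episode_return, so the overall return is preserved.
    """
    # Stage 1: temporal redistribution -- softmax weights over time steps
    # split the single episodic return into one reward per time step.
    w = np.exp(step_scores - step_scores.max())
    w /= w.sum()
    step_rewards = w * episode_return                       # shape (T,)

    # Stage 2: agent-level redistribution -- a per-step softmax over agents
    # splits each step reward according to relative contribution.
    c = np.exp(agent_scores - agent_scores.max(axis=1, keepdims=True))
    c /= c.sum(axis=1, keepdims=True)
    return c * step_rewards[:, None]                        # shape (T, N)

# Example: a 5-step episode with 3 agents and a terminal-only return of 10.
rng = np.random.default_rng(0)
dense = atrr_redistribute(10.0, rng.normal(size=5), rng.normal(size=(5, 3)))
assert np.isclose(dense.sum(), 10.0)  # redistribution preserves the return
```

Because each stage only re-weights the return without changing its total, the redistributed rewards leave the episodic objective intact, which is the property the paper's potential-based-shaping equivalence argument formalizes.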
Submission Number: 22