Efficiently Quantifying Individual Agent Importance in Cooperative MARL

Published: 08 Feb 2024, Last Modified: 08 Feb 2024 · XAI4DRL Workshop
Confirmation: I accept the constraint that, if the paper is accepted, at least one of the authors will attend the workshop and present the work.
Keywords: Multi-agent reinforcement learning, MARL, Evaluation, Credit assignment, Explainable AI, AI Safety
TL;DR: We propose an efficient method for measuring individual agent contributions in cooperative MARL and illustrate its usefulness as an explainability tool.
Abstract: Measuring the contribution of individual agents is challenging in cooperative multi-agent reinforcement learning (MARL), where team performance is typically inferred from a single shared global reward. Arguably the best current approach to measuring individual agent contributions is to use Shapley values. However, calculating these values is expensive: the computational complexity grows exponentially with the number of agents. In this paper, we adapt difference rewards into an efficient method for quantifying the contribution of individual agents, referred to as Agent Importance, whose computational cost is linear in the number of agents. We show empirically that the computed values correlate strongly with the true Shapley values, as well as with the true underlying individual agent rewards, used as ground truth in environments where these are available. We demonstrate how Agent Importance can be used to study MARL systems by diagnosing algorithmic failures discovered in prior MARL benchmarking work. Our analysis illustrates Agent Importance as a valuable explainability component for future MARL benchmarks.
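To make the cost contrast in the abstract concrete, below is a minimal Python sketch of the two quantities. It assumes, for illustration only, that team performance can be queried as a black-box function of which agents act (agents outside the coalition are treated as taking a default/no-op action); the function and variable names are hypothetical and this is not the paper's implementation, just a generic difference-rewards-style computation next to exact Shapley values.

```python
import itertools
from math import factorial
from typing import Callable, Dict, FrozenSet, Sequence

# Hypothetical evaluation oracle: maps a coalition of agent ids to the
# global team reward obtained when only those agents act and the rest
# take a default/no-op action. In practice this would be estimated by
# rolling out episodes in the environment.
RewardFn = Callable[[FrozenSet[int]], float]

def shapley_values(agents: Sequence[int], reward: RewardFn) -> Dict[int, float]:
    """Exact Shapley values: requires O(2^n) coalition evaluations."""
    n = len(agents)
    values = {i: 0.0 for i in agents}
    for i in agents:
        others = [a for a in agents if a != i]
        for r in range(len(others) + 1):
            for coalition in itertools.combinations(others, r):
                s = frozenset(coalition)
                # Standard Shapley weight for a coalition of size r.
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                values[i] += weight * (reward(s | {i}) - reward(s))
    return values

def agent_importance(agents: Sequence[int], reward: RewardFn) -> Dict[int, float]:
    """Difference-rewards-style importance: n + 1 evaluations, linear in n."""
    full = frozenset(agents)
    base = reward(full)
    # Importance of agent i = drop in global reward when i is removed
    # (i.e., replaced by the default action).
    return {i: base - reward(full - {i}) for i in agents}

if __name__ == "__main__":
    # Toy additive team reward: each agent contributes a fixed amount,
    # so both quantities should recover the individual contributions.
    contrib = {0: 1.0, 1: 2.0, 2: 0.5}
    reward = lambda coalition: sum(contrib[a] for a in coalition)
    print(shapley_values([0, 1, 2], reward))   # {0: 1.0, 1: 2.0, 2: 0.5}
    print(agent_importance([0, 1, 2], reward)) # {0: 1.0, 1: 2.0, 2: 0.5}
```

In the additive toy game the two measures coincide exactly; the paper's empirical claim is that they remain strongly correlated in realistic cooperative MARL settings, where the Shapley computation quickly becomes infeasible as the number of agents grows.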
Submission Type: Long Paper
Submission Number: 3