Pareto Actor-Critic for Equilibrium Selection in Multi-Agent Reinforcement Learning

Published: 24 Oct 2023, Last Modified: 24 Oct 2023. Accepted by TMLR.
Abstract: This work focuses on equilibrium selection in no-conflict multi-agent games, specifically the problem of selecting a Pareto-optimal Nash equilibrium from among several existing equilibria. Many state-of-the-art multi-agent reinforcement learning (MARL) algorithms have been shown to converge to Pareto-dominated equilibria due to the uncertainty each agent has about the policies of the other agents during training. To address sub-optimal equilibrium selection, we propose Pareto Actor-Critic (Pareto-AC), an actor-critic algorithm that utilises a simple property of no-conflict games (a superset of cooperative games): the Pareto-optimal equilibrium in a no-conflict game maximises the returns of all agents and is therefore the preferred outcome for all agents. We evaluate Pareto-AC on a diverse set of multi-agent games and show that it converges to higher episodic returns than seven state-of-the-art MARL algorithms and that it successfully converges to a Pareto-optimal equilibrium in a range of matrix games. Finally, we propose PACDCG, a graph neural network extension of Pareto-AC, which is shown to scale efficiently to games with a large number of agents.
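The property the abstract relies on can be made concrete with a minimal sketch (not the authors' implementation; the choice of game, the variable names, and the optimistic max-evaluation shown here are illustrative assumptions). In a no-conflict game such as the Stag Hunt, evaluating an action in expectation over an uncertain model of the other agent favours the Pareto-dominated equilibrium, whereas evaluating it as if the other agent plays the shared-return-maximising response favours the Pareto-optimal one:

```python
# Minimal sketch of the equilibrium-selection problem on the Stag Hunt,
# a no-conflict matrix game where both agents receive the same payoff.
# Rows = agent 1's action, cols = agent 2's action; actions: 0 = stag, 1 = hare.
# (stag, stag) is the Pareto-optimal equilibrium; (hare, hare) is
# Pareto-dominated but safer under uncertainty about the other agent.
import numpy as np

R = np.array([[4.0, 0.0],
              [3.0, 3.0]])

# Uniform uncertainty about the other agent's policy, as is typical
# early in training.
other_policy = np.array([0.5, 0.5])

# Expectation-based evaluation (standard independent-learner view):
# averaging over the other agent makes the dominated action look better.
q_expected = R @ other_policy    # [2.0, 3.0] -> prefers hare
# Optimistic evaluation (the no-conflict property): assume the other
# agent plays the joint-return-maximising response, which it also prefers.
q_optimistic = R.max(axis=1)     # [4.0, 3.0] -> prefers stag

print("expected:  ", q_expected, "-> action", q_expected.argmax())
print("optimistic:", q_optimistic, "-> action", q_optimistic.argmax())
```

The sketch only illustrates why expectation-based evaluation under policy uncertainty drifts toward the dominated equilibrium; Pareto-AC builds the optimistic evaluation into a full actor-critic learning procedure.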
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission:
- Fixed minor spelling, grammar, and capitalisation issues
- Moved some figures forward
- Added minor clarifications
Code: https://github.com/uoe-agents/epymarl
Assigned Action Editor: ~Marc_Lanctot1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 1365