Teamwork makes von Neumann work: Min-Max Optimization in Two-Team Zero-Sum Games

Published: 28 Jan 2022, Last Modified: 13 Feb 2023, ICLR 2022 Submission
Keywords: Min-max Optimization, Non-convex Optimization, Multi-agent learning, Multi-agent GANs, Game Theory, Duality Gap
Abstract: Motivated by recent advances in both the theoretical and applied aspects of multiplayer games, spanning from e-sports to multi-agent generative adversarial networks, we focus on min-max optimization in team zero-sum games. In this class of games, players are split into two teams, with payoffs equal within the same team and of opposite sign across the opposing team. Unlike textbook two-player zero-sum games, finding a Nash equilibrium in our class can be shown to be $\textsf{CLS}$-hard, i.e., it is unlikely that there exists a polynomial-time algorithm for computing Nash equilibria. Moreover, in this generalized framework, we establish that even asymptotic last-iterate or time-average convergence to a Nash equilibrium is not possible using Gradient Descent Ascent (GDA), its optimistic variant, or the extra-gradient method. Specifically, we present a family of team games whose induced utility is non-multilinear, with mixed Nash equilibria that are non-attracting $\textit{per se}$, arising as strict saddle points of the underlying optimization landscape. Leveraging techniques from control theory, we complement these negative results by designing a modified GDA that converges locally to Nash equilibria. Finally, we discuss connections of our framework to AI architectures with a team-competition structure, such as multi-agent generative adversarial networks.
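As a purely illustrative sketch (not the paper's construction), the snippet below runs plain GDA on a toy bilinear two-team zero-sum game, showing the kind of first-order dynamics the abstract refers to; the payoff f, step size, and initialization are assumptions made for illustration, and on such bilinear toys GDA is known to cycle or diverge rather than converge to an equilibrium.

```python
# Minimal GDA sketch on a toy two-team zero-sum game (illustrative assumptions,
# not the paper's family of games). Team X (players x1, x2) minimizes f,
# team Y (players y1, y2) maximizes it.
import numpy as np

def f(x, y):
    # Bilinear coupling between the two teams' joint actions.
    return (x[0] + x[1]) * (y[0] + y[1])

def grad_x(x, y):
    # Gradient of f with respect to each min-team player's action.
    return np.array([y[0] + y[1], y[0] + y[1]])

def grad_y(x, y):
    # Gradient of f with respect to each max-team player's action.
    return np.array([x[0] + x[1], x[0] + x[1]])

x = np.array([0.5, -0.3])   # illustrative initialization
y = np.array([0.2, 0.4])
eta = 0.05                  # illustrative step size

for t in range(1000):
    gx, gy = grad_x(x, y), grad_y(x, y)
    x = x - eta * gx        # min-team players take a descent step on f
    y = y + eta * gy        # max-team players take an ascent step on f

# On bilinear toys like this, the iterates typically spiral away from the
# equilibrium at the origin instead of converging.
print(x, y, f(x, y))
```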
One-sentence Summary: First-order methods fail to converge to Mixed Nash Equilibria in Team Zero-Sum Games.
Supplementary Material: zip