Markov Games with Decoupled Dynamics: Price of Anarchy and Sample Complexity

Published: 01 Jan 2023 · Last Modified: 12 Dec 2024 · CDC 2023 · License: CC BY-SA 4.0
Abstract: This paper studies finite-time-horizon Markov games in which the agents' dynamics are decoupled but the rewards may be coupled across agents. The policy class is restricted to local policies, under which each agent makes decisions based only on its local state. We first introduce the notion of smooth Markov games, which extends the smoothness argument for normal-form games ([1], [2]) to our setting, and we leverage the smoothness property to bound the price of anarchy of the Markov game. For a specific class of Markov games, Markov potential games, we also develop a distributed learning algorithm, multi-agent soft policy iteration (MA-SPI), which provably converges to a Nash equilibrium, and we provide its sample complexity. Lastly, our results are validated on a dynamic covering game.
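The decoupled-dynamics setting can be made concrete with a small toy instance. The sketch below is purely illustrative and not from the paper: each agent's local state evolves under its own transition kernel, depending only on that agent's local state and action, while the reward depends on the joint state (here, a covering-style bonus in the spirit of the dynamic covering game). All names, sizes, and the reward rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy instance: 2 agents, each with 2 local states and 2 actions.
n_agents, n_states, n_actions = 2, 2, 2

# Decoupled dynamics: P[i, s, a] is a distribution over agent i's NEXT local
# state, depending only on agent i's own (state, action) pair.
P = rng.dirichlet(np.ones(n_states), size=(n_agents, n_states, n_actions))

def step(local_states, actions):
    """Sample each agent's next local state from its own kernel only."""
    return [
        rng.choice(n_states, p=P[i, local_states[i], actions[i]])
        for i in range(n_agents)
    ]

# Coupled reward: depends on the JOINT state (a bonus when agents cover
# distinct states), even though the dynamics above are decoupled.
def reward(local_states):
    return 1.0 if len(set(local_states)) == n_agents else 0.0

# Roll out a short trajectory under uniformly random local policies.
s = [0, 0]
for _ in range(5):
    a = [int(rng.integers(n_actions)) for _ in range(n_agents)]
    s = step(s, a)
```

A local policy in this setting would map each agent's own `s[i]` to an action without observing the other agents' states; the coupling enters only through the shared reward.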