- Abstract: Multiagent systems where the agents interact among themselves and with a stochastic environment can be formalized as stochastic games. We consider a subclass of these games, named Markov potential games (MPGs), that appear often in economic and engineering applications where the agents share or compete for some common resource, the state-action sets are continuous, rewards might be nonconvex functions, and there might be coupled constraints. Previous analyses either followed a variational approach that is only valid for very simple cases (convex rewards, invertible dynamics, and no coupled constraints), or considered deterministic dynamics and provided an open-loop (OL) analysis, studying strategies that consist of predefined action sequences, which are not optimal for stochastic environments. We present a closed-loop (CL) analysis for MPGs and consider policies that depend on the current state, so that agents adapt to stochastic transitions. Following state-of-the-art results for single-agent problems obtained with deep reinforcement learning, we assume that the agents' policies belong to some parametric class (e.g., deep neural networks). We provide necessary and sufficient, easily verifiable conditions for a stochastic game to be an MPG, and show that a closed-loop Nash equilibrium can be found (or at least approximated) by solving a related optimal control problem (OCP). This is useful since solving an OCP, which is a single-objective problem, is usually much simpler than solving the original set of coupled OCPs that form the game, which is a multiobjective control problem. This is a considerable improvement over the previously standard approach for the CL analysis of MPGs, which yields no approximate solution if no Nash equilibrium belongs to the chosen parametric family, and which is practical only for simple parametric forms.
We illustrate the theoretical contributions by applying our approach to a noncooperative communications engineering game. We then solve the game with a deep reinforcement learning (DRL) algorithm and learn a set of policies (one per agent) that closely approximates an exact variational Nash equilibrium of the game.
- TL;DR: We present a general closed-loop analysis for Markov potential games and show that deep reinforcement learning can be used to learn an approximate closed-loop Nash equilibrium.
- Keywords: Stochastic games, potential games, closed loop, reinforcement learning, multiagent systems