Smooth Fictitious Play in Stochastic Games with Perturbed Payoffs and Unknown Transitions

Published: 31 Oct 2022, Last Modified: 17 Oct 2022
Venue: NeurIPS 2022 (Accept)
Readers: Everyone
Keywords: game theory, stochastic games, fictitious play, smooth best response, zero-sum stochastic games
Abstract: Recent extensions of the well-known fictitious play learning procedure from static to dynamic games were proved to converge globally to stationary Nash equilibria in two important classes of dynamic games (zero-sum and identical-interest discounted stochastic games). However, those decentralized algorithms require players to know the model exactly (the transition probabilities and their payoffs at every stage). To remove these strong assumptions, this paper introduces regularizations of the recent algorithms that are, moreover, model-free: players do not know the transitions, and their payoffs are perturbed at every stage. Our novel procedures can be interpreted as extensions to stochastic games of the classical smooth fictitious play procedures in static games, in which players' best responses are regularized via a smooth perturbation of their payoff functions. We prove that our family of procedures converges to stationary regularized Nash equilibria in the same classes of dynamic games (zero-sum and identical-interest discounted stochastic games). The proof uses the continuous-time smooth best-response dynamics counterparts and stochastic approximation methods. In the case of an MDP (a one-player stochastic game), our procedures converge globally to the optimal stationary policy of the regularized problem; in that sense, they can be seen as an alternative to the well-known Q-learning procedure.
TL;DR: We extend smooth fictitious play to stochastic games based on recent extensions of fictitious play.
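As an illustration of the underlying idea (not the paper's actual procedure, which handles discounted stochastic games with unknown transitions), here is a minimal sketch of classical smooth fictitious play in a static two-player zero-sum matrix game: each player plays a logit (softmax) best response to the opponent's empirical average strategy. The function names and the temperature parameter `tau` are assumptions for this sketch.

```python
import math

def softmax(values, tau):
    """Logit (smooth) best response with temperature tau."""
    m = max(values)
    exps = [math.exp((v - m) / tau) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def smooth_fictitious_play(A, tau=0.1, steps=5000):
    """Smooth fictitious play for a zero-sum game with payoff matrix A
    (row player's payoffs; column player receives -A)."""
    n, m = len(A), len(A[0])
    x = [1.0 / n] * n  # row player's empirical average strategy
    y = [1.0 / m] * m  # column player's empirical average strategy
    for t in range(1, steps + 1):
        # Expected payoffs of each pure action against the opponent's average.
        u1 = [sum(A[i][j] * y[j] for j in range(m)) for i in range(n)]
        u2 = [-sum(A[i][j] * x[i] for i in range(n)) for j in range(m)]
        b1, b2 = softmax(u1, tau), softmax(u2, tau)
        # Fold the smooth best responses into the running averages.
        x = [xi + (bi - xi) / (t + 1) for xi, bi in zip(x, b1)]
        y = [yi + (bi - yi) / (t + 1) for yi, bi in zip(y, b2)]
    return x, y

# Matching pennies: by symmetry, the regularized equilibrium is uniform.
A = [[1, -1], [-1, 1]]
x, y = smooth_fictitious_play(A)
```

Unlike classical fictitious play, which can cycle in matching pennies, the smoothed version drives both strategies toward the regularized (quantal response) equilibrium, here the uniform strategy.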
Supplementary Material: zip