Sample Efficient Stochastic Policy Extragradient Algorithm for Zero-Sum Markov Game


Sep 29, 2021 (edited Oct 05, 2021) · ICLR 2022 Conference Blind Submission
  • Keywords: Two-player Zero-sum Markov game, Entropy regularization, Policy extragradient, Nash equilibrium, Sample complexity
  • Abstract: The two-player zero-sum Markov game is a fundamental problem in reinforcement learning and game theory. Although many algorithms have been proposed for solving zero-sum Markov games, they generally lack desirable features such as being model-free, provably convergent, and sample efficient, or having symmetric and private policy updates. In this paper, we develop a fully decentralized stochastic policy extragradient algorithm with all of these properties. In particular, our algorithm introduces multiple stochastic estimators to accurately estimate the value functions involved in the stochastic updates, and it leverages entropy regularization to accelerate convergence. Specifically, with a properly chosen entropy-regularization parameter, we prove that the stochastic policy extragradient algorithm has a sample complexity of the order $\mathcal{O}(\frac{t_{\text{mix}}A_{\max}}{\mu_{\text{min}}\epsilon^{5.5}(1-\gamma)^{13.5}})$ for finding a solution whose Nash-equilibrium duality gap is at most $\epsilon$. This sample complexity substantially improves upon the state-of-the-art results.
  • One-sentence Summary: This paper proposes a fully decentralized, model-free, provably convergent, and sample-efficient stochastic policy extragradient algorithm with symmetric and private policy updates.
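To make the abstract's core idea concrete, below is a minimal, deterministic sketch of an entropy-regularized extragradient update on a two-player zero-sum *matrix* game (the one-state special case of the Markov games studied in the paper). All names (`tau`, `eta`) and the full-gradient multiplicative-weights updates are illustrative assumptions; the paper's actual algorithm is stochastic, uses multiple value-function estimators, and operates on multi-state Markov games.

```python
import numpy as np

def entropy_reg_extragradient(A, tau=0.05, eta=0.1, iters=2000):
    """Sketch: solve min_x max_y  x^T A y + tau*sum(x log x) - tau*sum(y log y)
    over the probability simplexes, via an extragradient (midpoint) scheme
    with multiplicative-weights (mirror-descent) updates."""
    m, n = A.shape
    # Deterministic non-uniform initialization, for illustration.
    x = np.arange(1.0, m + 1); x /= x.sum()
    y = np.arange(1.0, n + 1); y /= y.sum()
    for _ in range(iters):
        # Gradients of the entropy-regularized objective at the current point.
        gx = A @ y + tau * (np.log(x) + 1.0)      # min-player descent direction
        gy = A.T @ x - tau * (np.log(y) + 1.0)    # max-player ascent direction
        # Extrapolation (midpoint) step.
        xm = x * np.exp(-eta * gx); xm /= xm.sum()
        ym = y * np.exp(eta * gy);  ym /= ym.sum()
        # Update step, using gradients re-evaluated at the midpoint.
        gxm = A @ ym + tau * (np.log(xm) + 1.0)
        gym = A.T @ xm - tau * (np.log(ym) + 1.0)
        x = x * np.exp(-eta * gxm); x /= x.sum()
        y = y * np.exp(eta * gym);  y /= y.sum()
    return x, y

def duality_gap(A, x, y):
    """Unregularized Nash duality gap: max_y' x^T A y' - min_x' x'^T A y."""
    return (A.T @ x).max() - (A @ y).min()
```

On rock-paper-scissors (payoff matrix `[[0,-1,1],[1,0,-1],[-1,1,0]]`) the entropy-regularized equilibrium coincides with the uniform Nash equilibrium by symmetry, and the iterates drive the duality gap toward zero; the entropy terms make the regularized problem strongly monotone, which is the mechanism the abstract credits for accelerated convergence.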