Learning Two-Player Mixture Markov Games: Kernel Function Approximation and Correlated Equilibrium

12 Oct 2021 (modified: 05 May 2023) · Deep RL Workshop, NeurIPS 2021
Abstract: We consider learning a Nash equilibrium in two-player zero-sum Markov games with nonlinear function approximation, where the action-value function is approximated by a function in a Reproducing Kernel Hilbert Space (RKHS). The key challenge is how to perform exploration in the high-dimensional function space. We propose novel online learning algorithms that find an approximate Nash equilibrium by minimizing the duality gap. At the core of our algorithms are upper and lower confidence bounds derived from the principle of optimism in the face of uncertainty. We prove that our algorithm attains an $O(\sqrt{T})$ regret with polynomial computational complexity, under very mild assumptions on the reward function and the underlying dynamics of the Markov game. This work provides the first complexity results for learning two-player zero-sum Markov games with nonlinear function approximation in the mixture model setting, and discusses their implications for function approximation via deep neural networks.
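To illustrate the two ingredients the abstract names, a minimal sketch (not the paper's actual algorithm): kernel ridge regression gives a point estimate of the action-value function, an uncertainty term from the same kernel yields the upper and lower confidence bounds used for optimistic exploration, and the duality gap of a policy pair in the induced matrix game measures its distance from a Nash equilibrium. The function names, the RBF kernel choice, and the bonus form $\beta\sigma(x)$ are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    # Gaussian (RBF) kernel between the rows of X and Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * bandwidth ** 2))

def kernel_ucb_lcb(X_train, y_train, X_query, lam=1.0, beta=1.0):
    """Kernel ridge regression estimate of Q, plus/minus an exploration bonus.

    Returns (ucb, lcb) at the query points; beta scales the bonus.
    """
    K = rbf_kernel(X_train, X_train)
    k_q = rbf_kernel(X_query, X_train)            # shape (m, n)
    A = K + lam * np.eye(len(X_train))
    alpha = np.linalg.solve(A, y_train)
    mean = k_q @ alpha
    # Posterior-style uncertainty: diag of k(x, x) - k_q A^{-1} k_q^T.
    var = rbf_kernel(X_query, X_query).diagonal() - np.einsum(
        "ij,ij->i", k_q, np.linalg.solve(A, k_q.T).T)
    bonus = beta * np.sqrt(np.maximum(var, 0.0))
    return mean + bonus, mean - bonus

def duality_gap(Q, mu, nu):
    # Exploitability of the policy pair (mu, nu) in the zero-sum matrix
    # game Q: max player's best-response value minus min player's.
    # The gap is nonnegative and zero exactly at a Nash equilibrium.
    return (Q @ nu).max() - (mu @ Q).min()
```

In an optimistic algorithm of this flavor, the max player would act against the UCB estimate of Q and the min player against the LCB estimate, and the regret analysis bounds the cumulative duality gap of the resulting policy pairs.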