Decentralized Robust V-learning for Solving Markov Games with Model Uncertainty

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Machine Learning, Reinforcement Learning, Markov Games
TL;DR: Robust reinforcement learning algorithm for Markov games
Abstract: The Markov game is a popular reinforcement-learning framework for modeling competitive players in a dynamic environment. However, most existing works on Markov games focus on computing a certain equilibrium under the uncertain interactions among the players, but ignore the uncertainty of the environment model, which is ubiquitous in practical scenarios. In this work, we develop a tractable solution to Markov games with model uncertainty. Specifically, we propose a new notion of robust correlated equilibrium for Markov games with environment-model uncertainty. In particular, we prove that the robust correlated equilibrium has a simple modification structure, and that its characterization critically depends on the environment-model uncertainty. Moreover, we propose the first fully decentralized, sample-based algorithm for computing such a robust correlated equilibrium. Our analysis proves that the algorithm achieves a polynomial sample complexity of $\widetilde{\mathcal{O}}(SA^2 H^5 p_{\min}^{-2}\epsilon^{-2})$ for computing an approximate robust correlated equilibrium to accuracy $\epsilon$.
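To make the abstract's algorithmic claim concrete, below is a minimal, hypothetical sketch of one agent's loop in a decentralized robust V-learning scheme. It is not the paper's actual algorithm: the `env.reset()`/`env.step(a)` interface, the R-contamination level `rho` used to model transition uncertainty, and the Hedge-based per-state bandit learner are all assumptions filled in from the standard V-learning template; the submission's uncertainty set and update rules may differ.

```python
import numpy as np

def hedge_weights(loss_sums, lr):
    """Exponential-weights (Hedge) distribution over actions from cumulative losses."""
    logits = -lr * loss_sums
    logits -= logits.max()  # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def robust_v_learning_agent(env, S, A, H, K, rho=0.1):
    """Sketch of one agent's decentralized robust V-learning loop.

    Assumptions (not from the paper): `env.reset() -> s` and
    `env.step(a) -> (s_next, r)` are hypothetical interfaces, and `rho`
    is an assumed R-contamination level for the model-uncertainty set.
    """
    V = np.zeros((H + 1, S))          # value estimates, V[H] = 0 at the horizon
    loss_sums = np.zeros((H, S, A))   # cumulative Hedge losses per (step, state)
    visits = np.zeros((H, S), dtype=int)

    for k in range(K):                # K episodes
        s = env.reset()
        for h in range(H):
            visits[h, s] += 1
            t = visits[h, s]
            lr = np.sqrt(np.log(A) / t)             # Hedge learning rate
            pi = hedge_weights(loss_sums[h, s], lr)
            a = np.random.choice(A, p=pi)
            s_next, r = env.step(a)

            # Robust one-step target under R-contamination: with probability
            # rho the adversary may redirect to the worst next state, so the
            # backup blends the observed next value with min_s V.
            target = r + (1 - rho) * V[h + 1, s_next] + rho * V[h + 1].min()

            # Incremental value update with the usual V-learning step size,
            # plus an importance-weighted bandit loss for the chosen action.
            alpha = (H + 1) / (H + t)
            V[h, s] = (1 - alpha) * V[h, s] + alpha * target
            loss_sums[h, s, a] += (H - target) / (H * pi[a] + 1e-8)
            s = s_next
    return V
```

Because each agent only observes its own actions and rewards, the loop above needs no communication between players, which is what "fully decentralized" refers to; the robust blending in the target is one standard way to encode environment-model uncertainty, offered here purely as an illustration.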
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Reinforcement Learning (eg, decision and control, planning, hierarchical RL, robotics)