Achieving Logarithmic Regret in KL-Regularized Zero-Sum Markov Games

ICLR 2026 Conference Submission 22158 Authors

20 Sept 2025 (modified: 08 Oct 2025), CC BY 4.0
Keywords: Matrix Games, Markov Games, KL Regularization, Logarithmic Regret
TL;DR: We design learning algorithms for game-theoretic settings that, when equipped with KL regularization, achieve provably better sample efficiency than in the unregularized case.
Abstract: Reverse Kullback–Leibler (KL) divergence-based regularization with respect to a fixed reference policy is widely used in modern reinforcement learning to preserve the desired traits of the reference policy and sometimes to promote exploration (with a uniform reference policy, this is known as entropy regularization). Beyond serving as a mere anchor, the reference policy can also be interpreted as encoding prior knowledge about good actions in the environment. In the context of alignment, recent game-theoretic approaches have leveraged KL regularization with pretrained language models as reference policies, achieving notable empirical success in self-play–based methods. Despite these advances, the theoretical benefits of KL regularization in game-theoretic settings remain poorly understood. In this work, we develop and analyze algorithms that provably achieve improved sample efficiency under KL regularization. We study both two-player zero-sum matrix games and Markov games: for matrix games, we propose $\texttt{OMG}$, an algorithm based on best response sampling with optimistic bonuses, and we extend this idea to Markov games through the algorithm $\texttt{SOMG}$, which also uses best response sampling together with a novel notion of superoptimistic bonuses. Both algorithms achieve logarithmic regret in $T$ that scales inversely with the KL regularization strength $\beta$, in addition to the standard $\widetilde{\mathcal{O}}(\sqrt{T})$ regret independent of $\beta$, which is attained in both the regularized and unregularized settings.
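As a rough illustration of the setting described in the abstract (the notation below is ours and is not taken from the submission), a KL-regularized two-player zero-sum matrix game with payoff matrix $A$, reference policies $\mu_{\mathrm{ref}}$ and $\nu_{\mathrm{ref}}$, and regularization strength $\beta > 0$ is commonly formulated as

$$\max_{x \in \Delta_m} \min_{y \in \Delta_n} \; x^{\top} A y \;-\; \beta\,\mathrm{KL}\!\left(x \,\|\, \mu_{\mathrm{ref}}\right) \;+\; \beta\,\mathrm{KL}\!\left(y \,\|\, \nu_{\mathrm{ref}}\right),$$

and the guarantee summarized above would then plausibly take the shape $\mathrm{Reg}(T) = \widetilde{\mathcal{O}}\big(\min\{\sqrt{T},\, \beta^{-1}\log T\}\big)$; the precise objectives and regret statements are given in the paper.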
Supplementary Material: pdf
Primary Area: learning theory
Submission Number: 22158