Reevaluating Policy Gradient Methods for Imperfect-Information Games

Published: 23 Jun 2025, Last Modified: 25 Jun 2025, CoCoMARL 2025 Oral, CC BY 4.0
Keywords: imperfect-information games, two-player zero-sum games, reinforcement learning, multi-agent, game theory
TL;DR: We show that generic deep policy gradient methods may be stronger than previously understood for imperfect-information games.
Abstract:

In the past decade, motivated by the putative failure of naive self-play deep reinforcement learning (DRL) in adversarial imperfect-information games, researchers have developed numerous DRL algorithms based on fictitious play (FP), double oracle (DO), and counterfactual regret minimization (CFR). In light of recent results on the magnetic mirror descent algorithm, we hypothesize that simpler generic policy gradient methods such as PPO are competitive with or superior to these FP-, DO-, and CFR-based DRL approaches. To facilitate testing this hypothesis, we implement and release the first broadly accessible exact exploitability computations for four large games. Using these games, we conduct the largest exploitability comparison of DRL algorithms for imperfect-information games to date. Across more than 5600 training runs, we find that FP-, DO-, and CFR-based approaches fail to outperform generic policy gradient methods.
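
For reference, a minimal statement of the exploitability measure referenced above, assuming the standard two-player zero-sum (NashConv-style) definition rather than any formulation specific to this paper:

```latex
% Exploitability of a strategy profile (\sigma_1, \sigma_2) in a
% two-player zero-sum game, where u_i is player i's expected payoff
% and u_1 = -u_2. This is the common textbook form; some works report
% half of this quantity.
\[
  \operatorname{expl}(\sigma_1, \sigma_2)
    \;=\;
  \max_{\sigma_1'} u_1(\sigma_1', \sigma_2)
  \;+\;
  \max_{\sigma_2'} u_2(\sigma_1, \sigma_2')
\]
% The profile (\sigma_1, \sigma_2) is a Nash equilibrium exactly when
% its exploitability is zero, so lower values indicate strategies
% closer to equilibrium.
```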

Submission Number: 23