Outbidding and Outbluffing Elite Humans: Mastering Liar’s Poker via Self-Play and Reinforcement Learning

Published: 02 Mar 2026, Last Modified: 02 Mar 2026 · MALGAI · CC BY 4.0
Keywords: multi-agent, self-play, reinforcement learning, large language models
TL;DR: We introduce Solly, the first AI agent to play Liar's Poker (a multi-player, imperfect-information game) at an elite human level.
Abstract: AI researchers have long focused on poker-like games as a testbed for environments characterized by multi-player dynamics, imperfect information, and reasoning under uncertainty. While recent breakthroughs have matched elite human play at no-limit Texas hold'em, the multi-player dynamics there are subdued: most hands converge quickly, with only two players engaged through multiple rounds of bidding. In this paper, we present Solly, the first AI agent to achieve elite human play in reduced-format Liar's Poker, a game characterized by extensive multi-player engagement. We trained Solly using self-play with a model-free, actor-critic, deep reinforcement learning algorithm. Solly played at an elite human level as measured by win rate (won over 50% of hands) and equity (money won) in heads-up and multi-player Liar's Poker. Solly also outperformed large language models (LLMs), including those with reasoning abilities, on the same metrics. We integrated Solly with LLMs via the Model Context Protocol (MCP) to probe cooperative play with the agent and opponent analysis. Solly developed novel bidding strategies, randomized play effectively, and was not easily exploitable by world-class human players.
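To make the training recipe in the abstract concrete, below is a minimal, hypothetical sketch of self-play with a tabular actor-critic on a toy bluff/challenge matrix game. Everything here — the payoff matrix, role names, and learning rates — is an illustrative assumption, not the paper's Solly agent or its actual environment; the point is only to show the self-play loop (one set of parameters plays both roles) and the actor-critic update (policy gradient with a learned value baseline), which in games with mixed equilibria naturally yields the kind of randomized play the abstract mentions.

```python
import math
import random

random.seed(0)

# Assumed toy payoff matrix (to the bidder; the responder gets the negative).
#   rows: bidder action 0 = honest bid, 1 = bluff
#   cols: responder action 0 = accept, 1 = challenge
# Chosen so that the only equilibrium is mixed, forcing randomized play.
PAYOFF = [[-1.0, 1.0],
          [2.0, -2.0]]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for a, p in enumerate(probs):
        acc += p
        if r < acc:
            return a
    return len(probs) - 1

# Self-play: one shared parameter set plays both roles (0 = bidder, 1 = responder).
theta = {role: [0.0, 0.0] for role in (0, 1)}   # actor: policy logits per role
value = {role: 0.0 for role in (0, 1)}          # critic: value baseline per role
ALPHA_PI, ALPHA_V = 0.05, 0.1                   # assumed learning rates

for _ in range(2000):
    probs = {r: softmax(theta[r]) for r in (0, 1)}
    a_bid = sample(probs[0])
    a_resp = sample(probs[1])
    r_bid = PAYOFF[a_bid][a_resp]               # zero-sum: responder gets -r_bid
    for role, act, rew in ((0, a_bid, r_bid), (1, a_resp, -r_bid)):
        adv = rew - value[role]                 # advantage vs. critic baseline
        value[role] += ALPHA_V * adv            # critic update
        p = softmax(theta[role])
        for a in range(2):                      # actor update: policy gradient
            grad = (1.0 if a == act else 0.0) - p[a]
            theta[role][a] += ALPHA_PI * adv * grad

final = {r: softmax(theta[r]) for r in (0, 1)}
print(final)  # mixed (interior) strategies for both roles
```

In this toy game the unique equilibrium is mixed (the bidder bluffs some of the time; the responder challenges some of the time), so a deterministic policy is exploitable, which is the same reason effective randomization matters in Liar's Poker. The paper's actual method replaces the tabular policy and value with deep networks and the matrix game with the full multi-player game.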
Submission Number: 79