Deep Reinforcement Learning for Equilibrium Computation in Multi-Stage Auctions and Contests

Published: 18 Jun 2024 · Last Modified: 16 Jul 2024 · Agentic Markets @ ICML'24 Poster · CC BY 4.0
Keywords: deep reinforcement learning, Nash equilibrium, auctions, contests
TL;DR: Deep reinforcement learning with verification effectively approximates equilibria in continuous multi-stage auction and contest games, yielding new strategic insights.
Abstract: We compute equilibrium strategies in multi-stage games with continuous signal and action spaces, which are widely used in the management sciences and economics. Examples include sequential sales via auctions, multi-stage elimination contests, and Stackelberg Bertrand competitions. While such models are fundamental to game theory and its applications, equilibrium strategies are rarely known: the resulting systems of non-linear differential equations are considered intractable for all but elementary models. This has limited progress in game theory and remains a barrier to its adoption in the field. We show that Deep Reinforcement Learning and self-play can learn equilibrium bidding strategies for various multi-stage games. We find equilibria in models that have not yet been explored analytically, as well as new asymmetric equilibrium bid functions for established models of sequential auctions. Verifying equilibrium is challenging in such games due to the continuous signal and action spaces. We introduce a verification algorithm and prove that its error decreases for Lipschitz-continuous strategies as the level of discretization and the sample size increase.
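To make the discretize-and-sample verification idea from the abstract concrete, below is a minimal sketch on a deliberately simple game: a single-stage, symmetric first-price sealed-bid auction with uniform private values. The game, the candidate strategy `beta`, and all function and parameter names are illustrative assumptions, not the paper's exact multi-stage setup or algorithm; the sketch only shows how a utility-loss estimate can be formed from a grid over signals and actions plus Monte Carlo samples of opponents.

```python
# Sketch: estimate how much a bidder could gain by deviating from a candidate
# strategy, using discretized value/bid grids and sampled opponent values.
# Small estimates indicate an approximate equilibrium; the estimate tightens
# as the grids are refined and the sample size grows (cf. the abstract).
import numpy as np

rng = np.random.default_rng(0)

def beta(v, n=2):
    """Candidate bidding strategy (here the known analytical equilibrium)."""
    return (n - 1) / n * v

def estimate_utility_loss(beta, n=2, num_values=32, num_bids=64, num_samples=20_000):
    value_grid = np.linspace(0.0, 1.0, num_values)   # discretized signals
    bid_grid = np.linspace(0.0, 1.0, num_bids)       # discretized actions

    # Sample opponents' values and map them through the candidate strategy.
    opp_values = rng.uniform(0.0, 1.0, size=(num_samples, n - 1))
    highest_opp_bid = beta(opp_values, n).max(axis=1)

    # Empirical win probability of every grid bid against the sampled opponents
    # (first-price rule: win if bid is highest, pay own bid).
    win_prob = (bid_grid[None, :] > highest_opp_bid[:, None]).mean(axis=0)

    worst_case_loss = 0.0
    for v in value_grid:
        deviation_utility = win_prob * (v - bid_grid)          # utility of each grid bid
        own_bid = beta(v, n)
        actual_utility = (own_bid > highest_opp_bid).mean() * (v - own_bid)
        worst_case_loss = max(worst_case_loss, deviation_utility.max() - actual_utility)
    return worst_case_loss

print(f"estimated utility loss: {estimate_utility_loss(beta):.4f}")
```

In this toy setting the estimate should be close to zero (up to sampling noise) because `beta` is the known equilibrium; applying the same recipe to strategies learned via self-play is what allows an approximation error to be reported alongside them.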
Submission Number: 11