Characterizing the Convergence of Game Dynamics via Potentialness

TMLR Paper3323 Authors

10 Sept 2024 (modified: 22 Sept 2024) · Under review for TMLR · CC BY 4.0
Abstract: Understanding the convergence landscape of multi-agent learning is a fundamental problem of great practical relevance in many applications of artificial intelligence and machine learning. It is well known that learning dynamics converge to Nash equilibrium in potential games; at the same time, many important classes of games do not admit a potential (exact or even ordinal), so this convergence guarantee is not universally applicable. To measure how "close" a game is to being a potential game, we consider a distance function, which we call "potentialness", that relies on the strategic decomposition of games introduced by Candogan et al. (2011). We introduce a numerical framework for computing this metric and use it to calculate the degree of potentialness in a large class of generic matrix games, as well as in certain classes of games that are well studied in economics but known not to be generic, such as auctions and contests, which have become increasingly important due to the widespread automation of bidding and pricing with no-regret learning algorithms. We show empirically that potentialness decreases and concentrates as the number of agents or actions increases; moreover, potentialness turns out to be a good predictor of the existence of pure Nash equilibria and of the convergence of no-regret learning algorithms in matrix games. In particular, we observe that potentialness is very low for all-pay auctions and much higher for Tullock contests and first- and second-price auctions, explaining the success of learning in the latter.
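To make the idea of "potentialness" concrete, the following is a minimal sketch of one way such a measure can be computed, not the paper's own implementation. It fits a potential function to a finite game's unilateral deviation differences (the "flow" in the decomposition of Candogan et al., 2011) by least squares and reports the fraction of that flow captured by the fit, so an exact potential game scores 1 and a purely harmonic game scores 0. The function name `potentialness`, the uniform edge weighting, and the norm-ratio normalization are assumptions; the paper's metric may use different weights or scaling.

```python
import itertools
import numpy as np

def potentialness(payoffs):
    """Rough potentialness estimate for a finite game (assumed definition).

    payoffs: list of N numpy arrays, each of shape (n_1, ..., n_N);
             payoffs[i][profile] is player i's payoff at that pure profile.
    Returns a value in [0, 1]; 1 indicates an exact potential game.
    """
    n_players = len(payoffs)
    shape = payoffs[0].shape
    profiles = list(itertools.product(*[range(s) for s in shape]))
    index = {p: k for k, p in enumerate(profiles)}

    # Unilateral deviation "flow": for each edge (a, b) where only player i
    # changes strategy, record u_i(b) - u_i(a). Each edge is listed once.
    edges, flow = [], []
    for a in profiles:
        for i in range(n_players):
            for s in range(a[i] + 1, shape[i]):
                b = a[:i] + (s,) + a[i + 1:]
                edges.append((index[a], index[b]))
                flow.append(payoffs[i][b] - payoffs[i][a])
    flow = np.array(flow, dtype=float)

    # Gradient operator on the response graph: (D phi)_(a,b) = phi(b) - phi(a).
    D = np.zeros((len(edges), len(profiles)))
    for r, (ka, kb) in enumerate(edges):
        D[r, ka], D[r, kb] = -1.0, 1.0

    # Least-squares potential; its gradient is the potential component of the flow.
    phi, *_ = np.linalg.lstsq(D, flow, rcond=None)
    fitted = D @ phi
    denom = np.linalg.norm(flow)
    return 1.0 if denom == 0 else float(np.linalg.norm(fitted) / denom)

# Example games (payoff matrices assumed for illustration):
pd = [np.array([[3, 0], [5, 1]]), np.array([[3, 5], [0, 1]])]      # prisoner's dilemma
mp = [np.array([[1, -1], [-1, 1]]), np.array([[-1, 1], [1, -1]])]  # matching pennies
print(potentialness(pd))  # ≈ 1.0 (exact potential game)
print(potentialness(mp))  # ≈ 0.0 (purely harmonic, no potential structure)
```

Under this construction the ratio lies in [0, 1] because the fitted gradient is an orthogonal projection of the flow; how this relates to the paper's exact normalization and to the weighted inner products used by Candogan et al. is left open here.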
Submission Length: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Marcello_Restelli1
Submission Number: 3323