TL;DR: We propose a novel unbiased surrogate loss function for learning a Nash equilibrium in normal-form games via stochastic optimization techniques from machine learning.
Abstract: Nash equilibrium (NE) plays a central role in game theory. Efficiently computing an NE in normal-form games (NFGs) is challenging because of its computational complexity and the non-convexity of the underlying optimization problem. Machine learning (ML), the cornerstone of modern artificial intelligence, has demonstrated remarkable empirical performance across various applications involving non-convex optimization. To leverage non-convex stochastic optimization techniques from ML for approximating an NE, various loss functions have been proposed. Among these, only one is unbiased, allowing for unbiased estimation under sampled play. Unfortunately, this loss function suffers from high variance, which degrades the convergence rate. To mitigate this high variance and thereby improve the convergence rate, we propose a novel surrogate loss function named Nash Advantage Loss (NAL). We prove that NAL is unbiased and show that it exhibits significantly lower variance than the existing unbiased loss function. Experimental results demonstrate that the algorithm minimizing NAL achieves a significantly faster empirical convergence rate than other algorithms, while also reducing the variance of the estimated loss value by several orders of magnitude.
Lay Summary: (1) Leveraging non-convex stochastic optimization techniques from machine learning to compute Nash equilibria remains largely unexplored, primarily due to the lack of an appropriate loss function. (2) We propose a novel unbiased surrogate loss function with low variance. (3) This helps improve the convergence rate when learning Nash equilibria with non-convex stochastic optimization techniques from machine learning.
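For readers unfamiliar with the setting, the sketch below is a minimal, hypothetical illustration of what "learning an NE by loss minimization" means; it is not the paper's NAL. It runs plain gradient descent on the NashConv (exploitability) loss of a toy zero-sum NFG using exact, full-information gradients in JAX. The game (rock-paper-scissors), step size, and iteration count are illustrative assumptions, and the sampled-play, unbiased-estimation regime that motivates NAL is not modeled here.

```python
import jax
import jax.numpy as jnp

# Hypothetical toy game: rock-paper-scissors, a two-player zero-sum NFG.
# A is the row player's payoff matrix; the column player receives -A.
A = jnp.array([[ 0., -1.,  1.],
               [ 1.,  0., -1.],
               [-1.,  1.,  0.]])

def nashconv(logits):
    """NashConv (exploitability): the sum of each player's best-response
    gain over their current mixed strategy. It is zero exactly at an NE."""
    x = jax.nn.softmax(logits[0])        # row player's mixed strategy
    y = jax.nn.softmax(logits[1])        # column player's mixed strategy
    value = x @ A @ y                    # row player's expected payoff
    gain_row = jnp.max(A @ y) - value    # row's incentive to deviate
    gain_col = jnp.max(-(x @ A)) + value # column's incentive to deviate
    return gain_row + gain_col

grad_fn = jax.jit(jax.grad(nashconv))
key = jax.random.PRNGKey(0)
logits = list(jax.random.normal(key, (2, 3)))  # random initial strategies

for _ in range(2000):  # plain full-information gradient descent (illustrative)
    g = grad_fn(logits)
    logits = [p - 0.1 * gp for p, gp in zip(logits, g)]

print(nashconv(logits))  # should approach 0: uniform play is the NE here
```

In this full-information version the loss and its gradient are exact, so the bias and variance issues the abstract describes never arise; they appear only once payoffs must be estimated from sampled play, which is the regime NAL is designed for.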
Primary Area: Theory->Game Theory
Keywords: Stochastic Optimization, Nash Equilibrium, Normal-Form Games
Submission Number: 10862