Keywords: Operator learning, mean-field games, parametric complexity
TL;DR: We provide an operator learning method for solving finite-state mean-field games, equipped with rigorous approximation and generalization guarantees.
Abstract: Finite-state mean-field games (MFGs) arise as limits of large interacting particle systems and are governed by an MFG system, a coupled forward–backward system of differential equations consisting of a forward Kolmogorov–Fokker–Planck (KFP) equation describing the population distribution and a backward Hamilton–Jacobi–Bellman (HJB) equation defining the value function. Solving MFG systems efficiently is challenging, since the structure of each system depends on the initial distribution of players and the terminal cost of the game. We propose an operator learning framework that solves parametric families of MFGs, enabling generalization to new initial distributions and terminal costs without retraining. We provide theoretical guarantees on the approximation error, parametric complexity, and generalization performance of our method, based on a novel regularity result for an appropriately defined flow map corresponding to an MFG system. We then demonstrate empirically that our framework achieves accurate approximation for two representative instances of MFGs: a cybersecurity example and a high-dimensional quadratic model commonly used as a benchmark for numerical methods for MFGs.
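The abstract describes a learned operator that takes the game's parameters, the initial distribution over the finite state space and the terminal cost, and returns the MFG solution (value function and population distribution over time). Below is a minimal illustrative sketch of such a parametric solver using an assumed DeepONet-style branch/trunk decomposition; the architecture, layer sizes, state-space size, and all names are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed architecture, not the paper's method): an operator
# network mapping the MFG parameters (initial distribution m0, terminal cost g)
# and a query time t to an approximation of (u(t, .), m(t, .)).
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    """He-initialized weights and biases for a small fully connected network."""
    return [(rng.normal(0, np.sqrt(2.0 / a), (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

S, width, p = 5, 64, 32                                  # states, hidden width, basis size (illustrative)
branch = mlp_params([2 * S, width, width, 2 * S * p])    # encodes the game parameters (m0, g)
trunk  = mlp_params([1, width, width, p])                # encodes the query time t

def operator(m0, g, t):
    """Approximate (u(t, .), m(t, .)) for the MFG parametrized by (m0, g)."""
    coeff = mlp(branch, np.concatenate([m0, g])).reshape(2 * S, p)
    basis = mlp(trunk, np.array([[t]]))[0]               # (p,)
    out = coeff @ basis                                  # (2S,)
    u = out[:S]                                          # value function on the S states
    m = np.exp(out[S:]); m /= m.sum()                    # softmax keeps m a probability vector
    return u, m

# Example query: uniform initial distribution, random terminal cost, mid-horizon time.
m0 = np.full(S, 1.0 / S)
g = rng.normal(size=S)
u, m = operator(m0, g, t=0.5)
print(u.shape, m.shape, m.sum())                         # (5,) (5,) 1.0
```

In such a setup, generalization without retraining comes from treating (m0, g) purely as network inputs: once trained on sampled game parameters, new initial distributions and terminal costs are handled by a single forward pass.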
Supplementary Material: zip
Primary Area: learning on time series and dynamical systems
Submission Number: 20321