Networked Communication for Mean-Field Games with Function Approximation and Empirical Mean-Field Estimation
Keywords: mean-field games, deep reinforcement learning, decentralised learning, networked communication, coordination
TL;DR: We show theoretically and empirically that networked communication allows agents to learn faster than both centralised and independent agents in this mean-field-game setting.
Abstract: Recent algorithms allow decentralised agents, possibly connected via a communication network, to learn equilibria in mean-field games from a non-episodic run of the empirical system. However, these algorithms are limited to tabular settings: this computationally constrains the size of agents’ observation space, meaning they can handle only small state spaces and cannot generalise beyond policies that depend solely on the agent’s local state to so-called ‘population-dependent’ policies. We address this limitation by introducing function approximation to the existing setting, drawing on the Munchausen Online Mirror Descent method that has previously been employed only in finite-horizon, episodic, centralised settings. While this permits us to include the mean field in the observation for players’ policies, it is unrealistic to assume decentralised agents have access to this global information: we therefore also provide new algorithms allowing agents to locally estimate the global empirical distribution, and to improve this estimate via inter-agent communication. We prove theoretically that exchanging policy information helps networked agents outperform both independent and even centralised agents in function-approximation settings. Our experiments demonstrate this happening empirically, and show that the communication network allows decentralised agents to estimate the mean field for population-dependent policies.
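The abstract describes agents locally estimating the global empirical distribution and refining that estimate via inter-agent communication. Below is a minimal Python sketch of one plausible reading of that idea, namely consensus-style averaging of local empirical estimates over a communication graph; the function names, graph construction, and averaging rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch (not the paper's algorithm): each agent forms an empirical
# estimate of the population state distribution ("mean field") from the states it
# observes locally, then refines it by averaging with its neighbours' estimates
# over the communication graph, in the spirit of consensus-style mixing.
import numpy as np

def local_estimate(observed_states, n_states):
    """Empirical distribution over states from an agent's local observations."""
    counts = np.bincount(observed_states, minlength=n_states)
    return counts / max(counts.sum(), 1)

def communication_round(estimates, adjacency):
    """One round of mixing: each agent averages its estimate with its neighbours'."""
    new_estimates = []
    for i, est in enumerate(estimates):
        neighbours = np.flatnonzero(adjacency[i])
        stacked = np.vstack([est] + [estimates[j] for j in neighbours])
        new_estimates.append(stacked.mean(axis=0))
    return new_estimates

# Toy usage: 4 agents on a ring graph, 5 states, each agent sees a small sample.
rng = np.random.default_rng(0)
n_agents, n_states = 4, 5
true_states = rng.integers(0, n_states, size=100)            # global population states
adjacency = np.roll(np.eye(n_agents, dtype=int), 1, axis=1)  # directed ring
adjacency += adjacency.T                                      # make it undirected

estimates = [local_estimate(rng.choice(true_states, size=20), n_states)
             for _ in range(n_agents)]
for _ in range(3):                                            # a few communication rounds
    estimates = communication_round(estimates, adjacency)
```

Under these assumptions, repeated rounds drive the agents' estimates towards a common distribution closer to the global empirical one; the paper's algorithms and guarantees may differ in how estimates and policies are exchanged and combined.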
Supplementary Material: pdf
Type Of Paper: Full paper (max 8 pages)
Anonymous Submission: Anonymized submission.
Submission Number: 4