Graph Neural Network Is A Mean Field Game

25 Sept 2024 (modified: 22 Jan 2025) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: graph neural network, mean field game, reaction-diffusion equations, Hamiltonian flows
Abstract: In current graph neural networks (GNNs), it is common practice to apply pre-defined message-passing heuristics to all graph data, even though the stereotypical relational inductive bias (e.g., graph heat diffusion) might not fit an unseen graph topology. Such gross simplification may be responsible for the lack of an in-depth understanding of graph learning principles, and it challenges us to push the boundary from crafting application-specific GNNs to embracing a "meta-learning" paradigm. In this work, we ratchet the gear of GNNs another notch forward by formulating the GNN as a *mean field game*: the best learning outcome occurs at the *Nash* equilibrium, where the learned graph inference rationale allows each node to find the feature representation that is best not only for the individual node but also for the entire graph. In this spirit, we cast the search for a novel GNN mechanism as a variational *mean-field control* (MFC) problem, where the optimal relational inductive bias is essentially the critical point of mean-field information dynamics. Specifically, we seek, on the fly, the characteristic MFC functions of transportation mobility (controlling information exchange throughout the graph) and reaction mobility (controlling feature representation learning on each node) that uncover the most suitable learning mechanism for a given GNN instance, by solving an MFC variational problem through the lens of *Hamiltonian flows* (formulated as partial differential equations). In this context, our variational framework unifies existing GNN models as mean-field games with distinct equilibrium states, each characterized by a unique MFC functional. Furthermore, we present an agnostic end-to-end deep model, coined *Nash-GNN* (in honor of Nobel laureate Dr. John Nash), that jointly carves the nature of the inductive bias and fine-tunes the GNN hyper-parameters on top of the elucidated learning mechanism.
*Nash-GNN* achieves state-of-the-art (SOTA) performance on diverse graph data, including popular benchmark datasets and human connectomes. More importantly, the mathematical insight of mean-field games provides a new window onto the foundational principles of graph learning as an interactive dynamical system, allowing us to reshape how next-generation GNN models are designed.
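The abstract frames message passing as reaction-diffusion dynamics with two learnable controls: a transportation mobility governing information exchange along edges and a reaction mobility governing per-node feature updates. The paper's actual Nash-GNN formulation is not given here; the sketch below only illustrates that generic idea with an explicit-Euler step of graph reaction-diffusion dynamics, where the scalar coefficients `alpha` and `beta` (and the `tanh` reaction term) are hypothetical stand-ins for the learned mobility functions.

```python
import numpy as np

def reaction_diffusion_step(X, A, alpha=0.5, beta=0.1, dt=0.1):
    """One explicit-Euler step of graph reaction-diffusion dynamics.

    X : (n, d) node-feature matrix; A : (n, n) symmetric adjacency matrix.
    alpha plays the role of a transportation mobility (strength of feature
    diffusion along edges); beta plays the role of a reaction mobility
    (strength of the per-node update). Both coefficients and the update
    rule are illustrative, not the paper's Nash-GNN mechanism.
    """
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                        # guard isolated nodes
    L = A / deg - np.eye(A.shape[0])           # negated random-walk Laplacian
    diffusion = L @ X                          # information exchange along edges
    reaction = np.tanh(X)                      # per-node feature transformation
    return X + dt * (alpha * diffusion + beta * reaction)

# Tiny 3-node path graph: node 1 sits between nodes 0 and 2.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1.0], [0.0], [-1.0]])
X1 = reaction_diffusion_step(X, A)
```

After one step, the outer nodes are pulled toward their neighbor's value while the middle node stays at zero by symmetry; stacking such steps with learned, state-dependent mobilities is the kind of information dynamics the abstract describes.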
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4844