The logic of rational graph neural networks

26 Sept 2024 (modified: 18 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Graph Neural Networks, Rational activations, Expressivity, Logic
TL;DR: This article investigates the expressive power of Graph Neural Networks with rational activations.
Abstract: The expressivity of Graph Neural Networks (GNNs) can be described via appropriate fragments of first-order logic. In this context, uniform expressivity guarantees that a GNN can express a logical query with parameters that do not depend on the size of the input graphs. It has been established that the two-variable guarded fragment with counting (GC2) can be expressed uniformly via Rectified Linear Unit (ReLU) GNNs [Barceló et al., 2020]. Moreover, GC2 is the largest such fragment that a GNN with any activation function can express. In this article, we prove that, in contrast to ReLU GNNs, there are GC2 queries that cannot be uniformly expressed by any GNN with rational activations. As a consequence, non-polynomial activation functions do not grant GNNs GC2 uniform expressivity in general, answering an open question formulated by [Grohe, 2021]. We then present a strict subfragment of GC2 (RGC2) and prove that rational GNNs can express RGC2 queries uniformly over all graphs. Our numerical experiments illustrate that despite this theoretical disadvantage, rational GNNs are still able to learn some GC2 queries if some level of error is allowed.
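To make the notion of a rational activation concrete, here is a minimal sketch (in plain NumPy; all names, coefficients, and the layer shape are illustrative assumptions, not taken from the paper) of a message-passing GNN layer whose nonlinearity is a ratio of two polynomials:

```python
import numpy as np

def rational_activation(x, p=(1.0, 0.0), q=(1.0, 0.0, 1.0)):
    # A rational activation is P(x) / Q(x) for polynomials P, Q.
    # Coefficients are given highest-degree first (np.polyval convention):
    # defaults are P(x) = x and Q(x) = x^2 + 1, so Q never vanishes on the reals.
    return np.polyval(p, x) / np.polyval(q, x)

def gnn_layer(A, H, W_self, W_neigh):
    # One message-passing step: combine each node's own features with the
    # sum of its neighbours' features (A is the adjacency matrix),
    # then apply the rational activation elementwise.
    return rational_activation(H @ W_self + A @ H @ W_neigh)
```

For example, on a two-node graph with a single edge, `gnn_layer(A, H, W_self, W_neigh)` returns an updated feature matrix of the same shape as `H`; the choice of a denominator with no real roots keeps the activation well-defined everywhere.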
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7943
