ENHANCEMENT OF GNN’S EXPRESSIVE POWER VIA RECONSIDERING MODAL LOGIC

21 Sept 2023 (modified: 25 Mar 2024) | ICLR 2024 Conference Withdrawn Submission
Keywords: Graph Neural Networks, Expressiveness, Modal theory
TL;DR: We introduce a novel measurement of GNNs' expressivity and propose a GNN framework going beyond 1-WL
Abstract: Since AC-GNNs, in which nodes only gather information from their neighbors to update their features at each layer, are limited in expressive power, numerous models have been proposed to enable GNNs to go beyond the Weisfeiler-Lehman (WL) test. However, there is still a lack of effective methods to measure these models' expressive power: for a specific task, it remains difficult to determine whether a model is competent for that task. We tackle this problem by finding an equivalent Boolean classifier logic for each model. By checking whether the task can be expressed as a formula of the model's equivalent Boolean classifier logic, we can tell whether the model is competent for the task. We propose a framework for AC-GNNs, denoted l-div AC-GNNs, to enhance AC-GNNs' expressive power. More specifically, we classify a node's neighbors according to the existence of paths of different lengths from those neighbors back to the node itself. To find the equivalent Boolean classifier logic of l-div AC-GNNs, we introduce l-div graded modal logic and prove that a Boolean logical classifier can be expressed in l-div graded modal logic if and only if there exists an l-div AC-GNN that captures it. In this paper, three properties are defined for the framework: invariance and equivariance, approximation, and logical expressive power; we prove that l-div AC-GNNs possess these properties. A series of tasks is implemented to validate our theory, and the experimental results demonstrate the validity of both our method for measuring models' expressive power and the expressive power of l-div AC-GNNs.
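The sketch below is a minimal illustration (not the authors' implementation) of the neighbor-partitioning idea in the abstract: for a maximum length l, each neighbor u of a node v is labelled by the set of lengths l' <= l at which u can reach v again, and neighbors sharing a label form one class that a layer may aggregate separately. It assumes "paths" may be approximated by walks for simplicity; the function names and the toy graph are hypothetical.

from collections import defaultdict

def path_length_signature(adj, u, v, max_len):
    """Return the set of lengths l' in {1, ..., max_len} such that some walk
    of length l' leads from u to v (computed by breadth-first expansion)."""
    lengths = set()
    frontier = {u}
    for length in range(1, max_len + 1):
        frontier = {w for x in frontier for w in adj[x]}
        if v in frontier:
            lengths.add(length)
    return frozenset(lengths)

def partition_neighbors(adj, v, max_len):
    """Group the neighbors of v by their walk-length signature back to v."""
    classes = defaultdict(list)
    for u in adj[v]:
        classes[path_length_signature(adj, u, v, max_len)].append(u)
    return dict(classes)

if __name__ == "__main__":
    # Toy graph: a triangle {0, 1, 2} plus a pendant node 3 attached to 0.
    adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
    # Neighbors 1 and 2 reach node 0 by walks of length 1 and 2 (through the
    # triangle), whereas neighbor 3 only has the direct length-1 edge, so the
    # two groups land in different classes.
    print(partition_neighbors(adj, 0, max_len=2))

In an l-div AC-GNN layer, each such class would be aggregated with its own weights before the node update, which is what lets the model distinguish neighborhoods that plain AC-GNN aggregation treats identically.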
Primary Area: learning theory
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3416