Keywords: Graph Transformers, Graph Neural Networks, Structural Encodings, Green Kernel, Martin Kernel, Non-aperiodic substructures, DAGs
TL;DR: We propose new structural encodings for graph transformers based on the Green and Martin kernels. Our approaches achieve SOTA performance on 7 out of 8 benchmark datasets, particularly excelling on molecular and circuit graphs.
Abstract: Graph Transformers (GTs) are rapidly emerging as superior models, surpassing traditional message-passing neural networks in graph-level tasks. For optimal performance, it is essential to design GT architectures that embed graph inductive biases and utilize global attention mechanisms through effective structural encodings (SEs). In this work, we introduce novel SEs derived from a rigorous theoretical analysis of random walks (RWs), specifically leveraging the Green and Martin kernels. The Green and Martin kernels are mathematical tools used to characterize the long-term behavior of RWs on graphs. By integrating these kernels into the encoding process, we enhance the encodings' capability to represent complex graph structures accurately. Our empirical evaluations demonstrate that these approaches enable GTs to achieve state-of-the-art performance on 7 out of 8 benchmark datasets. These include molecular datasets characterized by intricate, non-aperiodic substructures such as benzene rings, and directed acyclic graphs common in the circuit domain. We attribute these performance improvements to the effective capture of the characteristics of non-aperiodic substructures and directed acyclic graphs by our extended encodings. The results not only validate the effectiveness of integrating the Green and Martin kernels into RW-based encodings but also underscore their potential to substantially enhance the learning capabilities of GTs across diverse applications.
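For reference, a minimal sketch of the standard textbook definitions of these kernels for a random walk with transition matrix $P$; the notation ($G$, $K$, root vertex $o$) is assumed here and the submission's exact formulation or normalization may differ:
\[
  G(x, y) \;=\; \sum_{n=0}^{\infty} P^{n}(x, y),
  \qquad
  K(x, y) \;=\; \frac{G(x, y)}{G(o, y)}.
\]
Under this standard reading, $G(x, y)$ is the expected number of visits to $y$ by a walk started at $x$, and the Martin kernel $K$ normalizes the Green kernel by a fixed reference vertex $o$, which is what ties both quantities to the long-term behavior of the walk mentioned in the abstract.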
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9128