From Theory to Practice: Rethinking Green and Martin Kernels for Unleashing Graph Transformers

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-NC-ND 4.0
TL;DR: We propose new structural encodings for graph transformers based on the Green and Martin kernels. Our approaches achieve SOTA performance on 7 out of 8 benchmark datasets, particularly excelling on molecular and circuit graphs.
Abstract: Graph Transformers (GTs) have emerged as a powerful alternative to message-passing neural networks, yet their performance heavily depends on effectively embedding structural inductive biases. In this work, we introduce novel structural encodings (SEs) grounded in a rigorous analysis of random walks (RWs), leveraging Green and Martin kernels that we have carefully redefined for AI applications while preserving their mathematical essence. These kernels capture the long-term behavior of RWs on graphs and allow for enhanced representation of complex topologies, including non-aperiodic and directed acyclic substructures. Empirical evaluations across eight benchmark datasets demonstrate strong performance across diverse tasks, notably in molecular and circuit domains. We attribute this performance boost to the improved ability of our kernel-based SEs to encode intricate structural information, thereby strengthening the global attention and inductive bias within GTs. This work highlights the effectiveness of theoretically grounded kernel methods in advancing Transformer-based models for graph learning.
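The abstract does not spell out the paper's exact kernel definitions, so the following is only a rough illustration of the general idea: for a random walk with transition matrix P, a discounted (killed) Green kernel G = Σ_t α^t P^t remains finite even on recurrent graphs, and its entries can serve as per-node structural encodings. The function name `green_kernel_se`, the discount `alpha`, the truncation depth `k`, and the choice of diagonal features below are assumptions made for this sketch, not the authors' formulation.

```python
# Minimal sketch: per-node structural encodings from a discounted random-walk
# Green kernel. NOT the paper's exact construction; the discount alpha and the
# diagonal features are illustrative choices.
import numpy as np

def green_kernel_se(adj: np.ndarray, alpha: float = 0.9, k: int = 8) -> np.ndarray:
    """Return a (num_nodes, k) encoding from truncated discounted Green kernels.

    adj   : dense adjacency matrix of the graph
    alpha : discount in (0, 1); full kernel would be G = sum_t alpha^t P^t
    k     : truncation depth (number of partial sums kept as features)
    """
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.clip(deg, 1, None)        # row-stochastic transition matrix
    n = adj.shape[0]
    G = np.eye(n)                          # partial sum, starts at the t=0 term
    Pt = np.eye(n)                         # holds alpha^t P^t
    feats = []
    for _ in range(k):
        Pt = alpha * (Pt @ P)
        G = G + Pt
        # diag(G) ~ discounted expected number of returns to each node;
        # a Martin-style variant would instead normalize columns of G by a
        # reference row, K(x, y) = G(x, y) / G(o, y).
        feats.append(np.diag(G).copy())
    return np.stack(feats, axis=1)         # shape (n, k)
```

Such a matrix could then simply be concatenated to the initial node features of a graph transformer, consistent with the claim below that the encodings work with existing graph models without architectural changes.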
Lay Summary: Graph Neural Networks (GNNs) are powerful tools used to analyze data represented as graphs, such as social networks, molecules, or circuits. A key challenge in these models is how to represent the structure of a graph in a way that the model can effectively understand and use. These representations are called "structural encodings." Our work introduces a new way to compute structural encodings for graphs using mathematical tools known as **Green and Martin kernels**, which come from the field of probability and random processes. These kernels have been studied for decades in mathematics but were not used in machine learning models because they were too difficult to compute or apply directly. We reformulate these kernels so that they can be used efficiently in GNNs, including graph transformers—advanced models that have shown strong performance in many applications. The result is a new type of structural encoding that better captures complex patterns in graphs, especially in settings like circuit analysis or molecule prediction. Importantly, our method is both **theoretically grounded** and **practical**: it works well in real-world tasks, scales to large graphs, and can be used with existing graph models without requiring any architectural changes. We believe this work opens new directions for combining deep learning with rigorous mathematical tools, enabling more powerful and reliable models for graph-based data.
Link To Code: https://github.com/yoonhuk30/GKSE-MKSE
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Graph Transformers, Graph Neural Networks, Structural Encodings, Green Kernel, Martin Kernel, Non-aperiodic substructures, DAGs
Submission Number: 10465