Keywords: Transformer, Structural Feature, Graph Classification
Abstract: Graph Transformers have emerged as a promising alternative to Graph Neural Networks (GNNs), offering global attention that mitigates oversmoothing and oversquashing issues. However, their success critically depends on how structural information is encoded, especially for graph-level tasks such as molecular property prediction. Existing positional and structural encodings capture some aspects of topology, yet overlook the diverse and interacting substructures that shape graph behavior. In this work, we introduce Gisty Intersection Signature Trait (GIST), a structural encoding based on the intersection cardinalities of k-hop neighborhoods between node pairs. GIST provides a permutation-invariant representation that is theoretically expressive, while remaining scalable through efficient randomized estimation. Incorporated as an attention feature, GIST enables Graph Transformers to capture fine-grained substructures together with node-pairwise relationships that underlie long-range interactions. Across diverse and comprehensive benchmarks, GIST maintains a uniformly strong performance profile: head-to-head evaluations consistently favor GIST, underscoring its role as a simple and expressive structural feature for Graph Transformers.
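The abstract describes GIST as the intersection cardinality of k-hop neighborhoods for a node pair. The paper's exact definition and its randomized estimator are not given here, so the following is only an illustrative sketch under the assumption that the k-hop neighborhood of a node is the set of nodes within graph distance k (including the node itself); the function names `k_hop_neighborhood` and `gist_feature` are hypothetical.

```python
from collections import deque

def k_hop_neighborhood(adj, src, k):
    """Nodes within distance k of src (including src), via BFS.

    Assumption: adj maps each node to a list of its neighbors.
    """
    seen = {src}
    frontier = deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == k:
            continue  # do not expand beyond k hops
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, dist + 1))
    return seen

def gist_feature(adj, u, v, k):
    """Sketch of a GIST-style pairwise feature: the intersection
    cardinality of the k-hop neighborhoods of u and v."""
    return len(k_hop_neighborhood(adj, u, k) & k_hop_neighborhood(adj, v, k))

# Toy graph: a 5-node path 0-1-2-3-4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(gist_feature(adj, 0, 2, 1))  # {0,1} ∩ {1,2,3} = {1} → 1
```

In a Graph Transformer, such a pairwise quantity could plausibly be injected as an attention bias between nodes u and v; the abstract's randomized estimation (e.g., sketching the neighborhood sets) would replace the exact set intersection at scale.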
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 13433