GraFix: A Graph Transformer with Fixed Attention Based on the WL Kernel

Published: 2024, Last Modified: 26 Sept 2025 · ICPR (4) 2024 · CC BY-SA 4.0
Abstract: In this paper we introduce GraFix, a novel graph transformer with fixed structural attention. Inspired by recent works that 1) harness the link between (graph) kernels and the attention mechanism of transformers and 2) favour simple, fixed (non-learnable) attention patterns over the standard attention mechanism, we propose to use graph kernels, specifically the WL kernel, to replace the learnable attention mechanism of a transformer with a fixed one that captures the structural similarity between substructures in the input graphs. The resulting graph transformer performs excellently on standard graph classification benchmarks, performing on par with and in some instances outperforming a wide variety of alternative graph neural network and graph transformer-based approaches, while at the same time benefiting from a reduced number of learnable parameters and a shorter training runtime.
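To make the core idea concrete, the following is a minimal sketch of how WL-based fixed attention could be constructed: nodes are iteratively recoloured as in the 1-WL test, and the attention weight between two nodes is then derived from their structural similarity (here, the simplifying assumption that nodes sharing a final WL colour attend to each other uniformly). The function names and the exact similarity rule are illustrative assumptions, not the paper's published construction.

```python
from collections import defaultdict


def wl_refine(adj, labels, iterations=2):
    """Refine node colours as in the 1-WL (colour refinement) test.

    adj: dict mapping node -> list of neighbour nodes
    labels: dict mapping node -> initial discrete label
    """
    colors = dict(labels)
    for _ in range(iterations):
        # Signature of a node: own colour plus sorted multiset of
        # neighbour colours, as in standard WL refinement.
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        # Compress signatures into fresh integer colours.
        palette = {}
        for v in sorted(adj):
            palette.setdefault(sigs[v], len(palette))
        colors = {v: palette[sigs[v]] for v in adj}
    return colors


def fixed_wl_attention(adj, labels, iterations=2):
    """Build a fixed (non-learnable) attention matrix from WL colours.

    Illustrative rule (an assumption, not the paper's exact kernel):
    weight 1 between nodes with the same final WL colour, 0 otherwise,
    followed by row normalisation.
    """
    colors = wl_refine(adj, labels, iterations)
    nodes = sorted(adj)
    attention = []
    for i in nodes:
        row = [1.0 if colors[i] == colors[j] else 0.0 for j in nodes]
        total = sum(row)
        attention.append([w / total for w in row])
    return attention
```

On a path graph 0-1-2-3 with uniform initial labels, the endpoints {0, 3} and the interior nodes {1, 2} each receive a shared colour, so each node attends equally to the nodes structurally equivalent to it; because the matrix is computed once from the graph structure, it replaces the learnable query-key attention entirely.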