Abstract: Fully connected Graph Transformers (GT) have rapidly become prominent in
the static graph community as an alternative to Message-Passing models, which
suffer from a lack of expressivity, oversquashing, and under-reaching. However,
in a dynamic context, by interconnecting all nodes at multiple snapshots with
self-attention, GTs lose both structural and temporal information. In this work, we
introduce Supra-LAplacian encoding for spatio-temporal TransformErs (SLATE),
a new spatio-temporal encoding that leverages the GT architecture while preserving
spatio-temporal information. Specifically, we transform Discrete Time Dynamic
Graphs into multi-layer graphs and take advantage of the spectral properties of their
associated supra-Laplacian matrix. Our second contribution explicitly models nodes'
pairwise relationships with a cross-attention mechanism, providing an accurate edge
representation for dynamic link prediction. SLATE outperforms numerous state-of-
the-art methods based on Message-Passing Graph Neural Networks combined with
recurrent models (e.g., LSTMs), and Dynamic Graph Transformers, on 9 datasets.
Code and instructions to reproduce our results will be open-sourced.
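To make the supra-Laplacian construction concrete, the following is a minimal sketch, not the SLATE implementation: it stacks T snapshot adjacency matrices into a block-diagonal supra-adjacency matrix, couples each node to its own copy in adjacent snapshots, and uses the low-frequency eigenvectors of the resulting supra-Laplacian as per-node, per-snapshot encodings. The function name, the coupling weight w, and the encoding dimension k are illustrative assumptions.

```python
# Hedged sketch of a supra-Laplacian spatio-temporal encoding.
# NOT the official SLATE code; `supra_laplacian_encoding`, the coupling
# weight `w`, and the encoding size `k` are illustrative assumptions.
import numpy as np
from scipy.sparse import block_diag, diags, eye, kron
from scipy.sparse.linalg import eigsh

def supra_laplacian_encoding(adjs, w=1.0, k=8):
    """adjs: list of T sparse (N x N) snapshot adjacency matrices."""
    T, N = len(adjs), adjs[0].shape[0]
    intra = block_diag(adjs, format="csr")              # spatial edges within each snapshot
    chain = diags([np.ones(T - 1)], [1], shape=(T, T))  # snapshot t -> t+1
    inter = w * kron(chain + chain.T, eye(N), format="csr")  # temporal self-links
    A = intra + inter                                   # supra-adjacency, (NT x NT)
    deg = np.asarray(A.sum(axis=1)).ravel()
    L = (diags(deg) - A).asfptype()                     # unnormalized supra-Laplacian
    # k smallest eigenpairs: smooth modes mix spatial and temporal structure
    _, vecs = eigsh(L, k=k, which="SM")
    return vecs.reshape(T, N, k)                        # one k-dim code per (snapshot, node)
```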
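Similarly, the second contribution (a cross-attention mechanism over node pairs) can be sketched as follows. This is a plausible reading, not the paper's architecture: each node's temporal sequence of embeddings attends to the other node's sequence, and the pooled outputs are scored as a link logit. The class name, head count, and mean-pooling choice are illustrative assumptions.

```python
# Hedged sketch of cross-attention edge representation for dynamic link
# prediction. NOT the official SLATE module; names and pooling are assumptions.
import torch
import torch.nn as nn

class EdgeCrossAttention(nn.Module):
    def __init__(self, d_model, n_heads=4):
        super().__init__()
        self.attn_uv = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_vu = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.score = nn.Linear(2 * d_model, 1)

    def forward(self, h_u, h_v):
        """h_u, h_v: (B, T, d) per-snapshot embeddings of nodes u and v."""
        z_uv, _ = self.attn_uv(h_u, h_v, h_v)   # u attends to v's history
        z_vu, _ = self.attn_vu(h_v, h_u, h_u)   # v attends to u's history
        edge = torch.cat([z_uv.mean(1), z_vu.mean(1)], dim=-1)  # pooled pair code
        return self.score(edge)                  # link logit for the (u, v) pair
```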