Keywords: Dynamic Graph Representation Learning, Higher-Order Graph Representation Learning, Transformer, Block-Recurrent Transformer
Abstract: Many graph representation learning (GRL) problems are dynamic, with millions of edges added or removed per second. A fundamental workload in this setting is dynamic link prediction: using a history of graph updates to predict whether a given pair of vertices will become connected. Recent schemes for link prediction in such dynamic settings employ Transformers, modeling individual graph updates as single tokens. In this work, we propose HOT: a model that enhances this line of work by harnessing higher-order (HO) graph structures, specifically k-hop neighbors and more general subgraphs containing a given pair of vertices. Encoding such HO structures into the attention matrix of the underlying Transformer yields higher link prediction accuracy, but at the expense of increased memory pressure. To alleviate this, we resort to a recent class of schemes that impose a hierarchy on the attention matrix, significantly reducing the memory footprint. The final design offers a sweet spot between high accuracy and low memory utilization. HOT outperforms other dynamic GRL schemes, for example achieving 9%, 7%, and 15% higher accuracy than DyGFormer, TGN, and GraphMixer, respectively, on the MOOC dataset. Our design can be seamlessly extended to other dynamic GRL workloads.
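The abstract does not give implementation details, so the following is only a minimal sketch of the kind of higher-order signal it describes: extracting k-hop neighborhoods of a candidate vertex pair from the edges observed before the query time and summarizing their overlap. The function names (temporal_adjacency, k_hop_neighbors, pair_ho_features) and the specific overlap statistics are illustrative assumptions, not HOT's actual encoding, which injects such structure into the Transformer's attention matrix.

```python
# Sketch only: per-hop neighborhood overlap features for a vertex pair (a, b)
# in a temporal graph, computed from edges with timestamps before t_query.
from collections import defaultdict

def temporal_adjacency(edges, t_query):
    """Adjacency sets restricted to edges observed strictly before t_query."""
    adj = defaultdict(set)
    for u, v, t in edges:
        if t < t_query:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def k_hop_neighbors(adj, source, k):
    """Vertices reachable from `source` within k hops (excluding the source)."""
    frontier, visited = {source}, {source}
    for _ in range(k):
        frontier = {w for u in frontier for w in adj[u]} - visited
        visited |= frontier
    return visited - {source}

def pair_ho_features(edges, a, b, t_query, k=2):
    """Per-hop overlap statistics for the pair (a, b): a toy stand-in for the
    higher-order structural signal fed to the link predictor."""
    adj = temporal_adjacency(edges, t_query)
    feats = []
    for hop in range(1, k + 1):
        na = k_hop_neighbors(adj, a, hop)
        nb = k_hop_neighbors(adj, b, hop)
        common = len(na & nb)
        union = len(na | nb) or 1
        feats.append((common, common / union))  # overlap count, Jaccard
    return feats

# Example: tiny temporal graph; will vertices 0 and 3 become connected at t=10?
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 3.0), (0, 2, 4.0)]
print(pair_ho_features(edges, 0, 3, t_query=10.0, k=2))  # [(1, 0.5), (2, 0.4)]
```

In a full model, such pair-aware features would be attached to the token representations of the recent graph updates; the memory cost of attending over long update histories is then what motivates the hierarchical attention scheme mentioned above.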
Submission Type: Full paper proceedings track submission (max 9 main pages).
Submission Number: 173