Directed Graph Transformers

Published: 10 Jul 2024, Last Modified: 10 Jul 2024. Accepted by TMLR.
Abstract: In this paper, we address the problem of capturing graph directionality using transformers. Most existing graph transformers capture distances between graph nodes but do not take edge direction into account. This is a limiting assumption, since many graph applications need to exploit sophisticated relationships in graph data, such as time, causality, or generic dependency constraints. We introduce a novel graph transformer architecture that explicitly accounts for the directionality between connected graph nodes. To achieve this, we use dual encodings to represent both potential roles, i.e., source or target, of each pair of vertices linked by a directed edge. These encodings are learned by leveraging the latent adjacency information extracted from a directional attention module, localized with $k$-hop neighborhood information. Extensive experiments on synthetic and real graph datasets show that our approach yields significant accuracy gains over previous graph transformer (GT) and graph neural network (GNN) approaches, achieving state-of-the-art (SOTA) results on inherently directed graphs.
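
The abstract describes the core mechanism: dual encodings for the source and target roles of each vertex pair, combined with a directional attention module restricted to $k$-hop neighborhoods. Below is a minimal PyTorch sketch of that idea. All names here (DirectionalAttention, src_proj, tgt_proj, k_hop_mask) are hypothetical illustrations under our own assumptions, not the authors' API; their implementation is in the linked repository.

    import torch
    import torch.nn as nn

    class DirectionalAttention(nn.Module):
        """Sketch: attention over node pairs with dual source/target encodings."""
        def __init__(self, dim: int):
            super().__init__()
            # Dual encodings: a node is projected differently depending on
            # whether it acts as the source or the target of a directed edge.
            self.src_proj = nn.Linear(dim, dim)
            self.tgt_proj = nn.Linear(dim, dim)
            self.value_proj = nn.Linear(dim, dim)
            self.scale = dim ** -0.5

        def forward(self, x, adj, k_hop_mask):
            # x: (N, dim) node features
            # adj: (N, N) directed adjacency, adj[i, j] = 1 if edge i -> j
            # k_hop_mask: (N, N) boolean mask keeping only k-hop node pairs
            q = self.src_proj(x)   # node roles as edge sources
            k = self.tgt_proj(x)   # node roles as edge targets
            scores = (q @ k.T) * self.scale
            # Bias scores with the directed adjacency so the pairs (i, j)
            # and (j, i) receive distinct attention logits.
            scores = scores + adj
            scores = scores.masked_fill(~k_hop_mask, float("-inf"))
            attn = torch.softmax(scores, dim=-1)
            return attn @ self.value_proj(x)

    # Toy usage: 4 nodes, directed edges 0 -> 1 and 1 -> 2, 2-hop mask.
    x = torch.randn(4, 16)
    adj = torch.zeros(4, 4)
    adj[0, 1] = adj[1, 2] = 1.0
    reach = adj + adj.T + torch.eye(4)          # 1-hop reachability (with self-loops)
    k_hop_mask = ((reach + reach @ reach) > 0)  # pairs within 2 hops
    out = DirectionalAttention(16)(x, adj, k_hop_mask)

Including self-loops in the mask keeps every softmax row finite; how the paper localizes attention to $k$-hop neighborhoods may differ from this toy reachability construction.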
Submission Length: Regular submission (no more than 12 pages of main content)
Video: https://www.youtube.com/watch?v=V7HGpuUPniw
Code: https://github.com/Qitong-Wang/Directed-Graph-Transformers
Assigned Action Editor: ~Giannis_Nikolentzos1
Submission Number: 2312