We study how to capture directionality in graph transformers for learning over directed graphs. Most existing graph transformers do not take edge direction into account. We therefore introduce a novel graph transformer architecture that explicitly models edge directionality. To achieve this, we use dual encodings to represent the two potential roles, i.e., source or target, of each pair of vertices linked by a directed edge. These dual encodings are learned by leveraging the latent adjacency information extracted from a novel directional attention module, localized with $k$-hop neighborhood information. We also study alternative approaches to incorporating directionality into other graph transformers to improve their performance on directed graph learning tasks. To evaluate the importance of edge direction, we empirically test, via randomization of edge directions, whether direction actually matters for the downstream task. We further propose two new directional graph datasets in which direction is intrinsic to the learning task. Experiments on these directional graph datasets show that our approach yields state-of-the-art results.
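As a concrete illustration of the dual-encoding idea, the sketch below implements one head of direction-aware attention in PyTorch: separate key/value projections handle the source-to-target and target-to-source roles of each vertex pair, and attention is restricted to the $k$-hop neighborhood of the directed graph. All names (`DirectionalAttention`, `k_hop_mask`, the default $k=2$) are illustrative assumptions, not the paper's actual API.

```python
# Minimal sketch of direction-aware attention with dual (source/target)
# encodings and k-hop localization. Names and defaults are assumptions,
# not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def k_hop_mask(adj: torch.Tensor, k: int) -> torch.Tensor:
    """Boolean (n, n) mask of pairs reachable within k directed hops.

    adj: dense (n, n) float adjacency matrix of the directed graph.
    """
    n = adj.size(0)
    reach = torch.eye(n, dtype=torch.bool, device=adj.device)  # 0 hops: self
    power = torch.eye(n, device=adj.device)
    for _ in range(k):
        power = power @ adj          # walks of length 1, 2, ..., k
        reach |= power.bool()
    return reach


class DirectionalAttention(nn.Module):
    """One attention head with separate forward (source->target) and
    reverse (target->source) branches, each masked to its k-hop
    neighborhood, then fused by a linear output layer."""

    def __init__(self, d_model: int, k: int = 2):
        super().__init__()
        self.k = k
        self.q = nn.Linear(d_model, d_model)
        # Dual key/value projections: one pair per edge direction/role.
        self.k_fwd = nn.Linear(d_model, d_model)
        self.v_fwd = nn.Linear(d_model, d_model)
        self.k_rev = nn.Linear(d_model, d_model)
        self.v_rev = nn.Linear(d_model, d_model)
        self.out = nn.Linear(2 * d_model, d_model)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n, d_model) node features; adj: (n, n) directed adjacency.
        scale = x.size(-1) ** 0.5
        outs = []
        for key, val, a in ((self.k_fwd, self.v_fwd, adj),
                            (self.k_rev, self.v_rev, adj.t())):
            scores = self.q(x) @ key(x).t() / scale
            mask = k_hop_mask(a, self.k)                # localize to k hops
            scores = scores.masked_fill(~mask, float("-inf"))
            outs.append(F.softmax(scores, dim=-1) @ val(x))
        return self.out(torch.cat(outs, dim=-1))
```

Running the reverse branch on $A^\top$ rather than $A$ is what lets each vertex pair contribute in both of its roles; an undirected baseline would collapse the two branches into one.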
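The randomization check mentioned above can be implemented by flipping each edge's direction at random and comparing downstream scores before and after: if performance is unchanged, direction likely carries no task-relevant signal. This is a minimal sketch under the usual (2, num_edges) COO edge-list convention; `randomize_directions` and the 50% flip rate are hypothetical choices, not the paper's exact protocol.

```python
# Hypothetical direction-randomization helper; the 0.5 flip probability
# is an assumption, not the paper's protocol.
import torch


def randomize_directions(edge_index: torch.Tensor) -> torch.Tensor:
    """Flip each directed edge (u, v) -> (v, u) with probability 0.5.

    edge_index: (2, num_edges) integer tensor in COO convention.
    """
    flip = torch.rand(edge_index.size(1)) < 0.5
    out = edge_index.clone()
    out[0, flip], out[1, flip] = edge_index[1, flip], edge_index[0, flip]
    return out
```

Training the same model once on the original `edge_index` and once on its randomized counterpart then gives a direct empirical measure of how much the downstream task depends on edge direction.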