DIRECTIONALITY IN GRAPH TRANSFORMERS

24 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: pdf
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: graph transformers, graph neural networks
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: We study how to capture directionality in graph transformers for learning over directed graphs. Most existing graph transformers do not take edge direction into account. We therefore introduce a novel graph transformer architecture that explicitly accounts for edge directionality. To achieve this, we use dual encodings to represent both potential roles, i.e., source or target, of each pair of vertices linked by a directed edge. These dual encodings are learned by leveraging the latent adjacency information extracted from a novel directional attention module, localized with $k$-hop neighborhood information. We also study alternative approaches to incorporating directionality into other graph transformers to enhance their performance on directed graph learning tasks. To evaluate the importance of edge direction, we empirically characterize, via randomization, whether direction actually matters for the downstream task. We propose two new directional graph datasets in which direction is intrinsically tied to the learning task. Through experiments on these directional graph datasets, we show that our approach yields state-of-the-art results.
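To make the abstract's core idea concrete, here is a minimal PyTorch sketch of single-head attention with dual source/target encodings restricted to $k$-hop directed neighborhoods. It is only an illustration under our own assumptions: the class and function names (`DirectionalAttention`, `k_hop_mask`) and all design details are hypothetical and not taken from the paper's actual architecture.

```python
import torch
import torch.nn as nn


def k_hop_mask(adj: torch.Tensor, k: int) -> torch.Tensor:
    """Boolean mask of vertex pairs reachable within k directed hops.

    adj is a dense (n, n) float {0, 1} adjacency matrix. The diagonal
    is included so every vertex can always attend to itself.
    """
    n = adj.size(0)
    mask = torch.eye(n, dtype=torch.bool, device=adj.device)
    power = torch.eye(n, device=adj.device)
    for _ in range(k):
        power = power @ adj            # walks of length up to k
        mask |= power > 0
    return mask


class DirectionalAttention(nn.Module):
    """Hypothetical sketch: attention with separate encodings for a
    vertex's source role and target role, masked to k-hop neighborhoods."""

    def __init__(self, dim: int, k: int = 2):
        super().__init__()
        self.q_src = nn.Linear(dim, dim)  # query when the vertex acts as a source
        self.k_tgt = nn.Linear(dim, dim)  # key when the vertex acts as a target
        self.q_tgt = nn.Linear(dim, dim)  # query when the vertex acts as a target
        self.k_src = nn.Linear(dim, dim)  # key when the vertex acts as a source
        self.v = nn.Linear(dim, dim)
        self.k = k

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (n, dim) node features; adj: (n, n) directed adjacency matrix
        d = x.size(-1)
        fwd = k_hop_mask(adj, self.k)      # reachable along edge direction
        bwd = k_hop_mask(adj.t(), self.k)  # reachable against edge direction
        # Separate score matrices for the two roles of each vertex pair.
        s_fwd = self.q_src(x) @ self.k_tgt(x).t() / d ** 0.5
        s_bwd = self.q_tgt(x) @ self.k_src(x).t() / d ** 0.5
        neg_inf = torch.full_like(s_fwd, float('-inf'))
        scores = torch.where(fwd, s_fwd, neg_inf)          # forward-reachable pairs
        scores = torch.where(bwd & ~fwd, s_bwd, scores)    # backward-only pairs
        attn = torch.softmax(scores, dim=-1)
        return attn @ self.v(x)


# Toy usage: 5 nodes, 16-dim features, random directed edges.
x = torch.randn(5, 16)
adj = (torch.rand(5, 5) < 0.3).float()
out = DirectionalAttention(dim=16, k=2)(x, adj)  # (5, 16)
```

The key design choice this sketch tries to convey is that a vertex pair is scored through different projections depending on which end of the directed path each vertex sits on, so forward and backward reachability are never conflated into a single symmetric attention pattern.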
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8547