Enhancing Graph Tasks with a Dual-Block Graph Transformer: A Synergistic Approach to Local and Global Attention

18 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Graph Transformer, Transformer, Graph Learning, Semi-supervised
Abstract: In this work, we address the limitations of traditional Transformers on graph tasks. Some approaches predominantly leverage local attention mechanisms akin to Graph Neural Networks (GNNs), often neglecting the global attention capabilities inherent in the Transformer model. Conversely, other methods focus heavily on the global attention aspect of the Transformer, ignoring the importance of local attention in the context of graph structure. To address this, we propose a novel Message Passing Transformer with strategic modifications to the original Transformer that significantly enhance its performance on graph tasks by improving the handling of local attention. Building on this, we further propose a novel Dual-Block Graph Transformer that synergistically integrates local and global attention mechanisms. This architecture comprises two distinct blocks inside each head: the Message Passing Block, designed to emulate local attention, and a second block that encapsulates the global attention mechanism. This dual-block design inside each head enables our model to capture both fine-grained local and high-level global interactions in graph tasks, leading to a more comprehensive and robust graph representation. We empirically validate our model on node classification tasks, particularly on heterophilic graphs, and on graph classification tasks. The results demonstrate that our Dual-Block Graph Transformer significantly outperforms both GNN and Graph Transformer models. Remarkably, this superior performance is achieved without the need for complex positional encoding strategies, underscoring the efficacy of our approach.
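To make the dual-block design concrete, the following is a minimal sketch (not the authors' released code) of a single attention head that pairs a local, adjacency-masked Message Passing Block with an unrestricted global attention block; all names (e.g. `DualBlockHead`, the `adj` argument, the additive combination of the two blocks) are illustrative assumptions rather than details confirmed by the abstract.

```python
# Hedged sketch of one dual-block attention head, assuming PyTorch and a
# dense binary adjacency matrix with self-loops. Not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualBlockHead(nn.Module):
    """One head containing a local (adjacency-masked) block and a global block."""

    def __init__(self, dim: int, head_dim: int):
        super().__init__()
        self.q = nn.Linear(dim, head_dim)
        self.k = nn.Linear(dim, head_dim)
        self.v = nn.Linear(dim, head_dim)
        self.scale = head_dim ** -0.5

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) node features; adj: (N, N) binary adjacency with self-loops.
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = (q @ k.transpose(-2, -1)) * self.scale          # (N, N)

        # Message Passing Block: attention restricted to graph neighbours,
        # emulating local, GNN-style aggregation.
        local_scores = scores.masked_fill(adj == 0, float("-inf"))
        local_out = F.softmax(local_scores, dim=-1) @ v

        # Global block: unrestricted self-attention over all node pairs.
        global_out = F.softmax(scores, dim=-1) @ v

        # Combine the two blocks; a simple sum is assumed here.
        return local_out + global_out


if __name__ == "__main__":
    x = torch.randn(5, 16)                  # 5 nodes, 16-dim features
    adj = (torch.rand(5, 5) > 0.5).float()
    adj.fill_diagonal_(1.0)                 # ensure self-loops
    head = DualBlockHead(dim=16, head_dim=8)
    print(head(x, adj).shape)               # torch.Size([5, 8])
```

In a full model, several such heads would presumably be concatenated per layer, but how the paper mixes or weights the two blocks is not specified in the abstract.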
Primary Area: representation learning for computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1123