Graph Attention Multi-layer Perceptron

29 Sept 2021 (modified: 22 Oct 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Graph Neural Network, Attention, Scalability
Abstract: Recently, graph neural networks (GNNs) have achieved great success in many graph-based applications. However, most GNNs suffer from a critical limitation: the learned representation is constructed from a fixed k-hop neighborhood and is insensitive to the individual needs of each node, which greatly hampers performance. To satisfy the unique needs of each node, we propose a new architecture -- Graph Attention Multi-Layer Perceptron (GAMLP). GAMLP combines multi-scale knowledge and learns to capture the underlying correlations between different scales of knowledge with two novel attention mechanisms: Recursive attention and Jumping Knowledge (JK) attention. In addition to node features, the knowledge contained in node labels is also exploited to reinforce the performance of GAMLP. Extensive experiments on 12 real-world datasets demonstrate that GAMLP achieves state-of-the-art performance while enjoying high scalability and efficiency.
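To make the node-adaptive combination of multi-scale knowledge concrete, below is a minimal sketch of a GAMLP-style model in PyTorch: features propagated over 0..K hops (e.g. X, AX, A^2X, ...) are assumed to be precomputed, and a simple per-node attention over hops weights them before an MLP classifier. The class name `HopAttentionMLP`, the single-linear scoring function, and the hyperparameters are illustrative assumptions, not the authors' exact Recursive or JK attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HopAttentionMLP(nn.Module):
    """Sketch of a GAMLP-style model: attention over precomputed
    multi-hop features, followed by an MLP classifier."""

    def __init__(self, in_dim, hidden_dim, out_dim, num_hops):
        super().__init__()
        # Scores each hop's features per node (illustrative attention form).
        self.hop_score = nn.Linear(in_dim, 1)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, hop_feats):
        # hop_feats: list of [N, in_dim] tensors, one per propagation step.
        H = torch.stack(hop_feats, dim=1)           # [N, K+1, in_dim]
        att = F.softmax(self.hop_score(H), dim=1)   # per-node weights over hops
        combined = (att * H).sum(dim=1)             # node-adaptive combination
        return self.mlp(combined)
```

Because the hop features can be precomputed once with sparse matrix products over the graph, training itself only involves the MLP and the attention weights, which is what makes this family of models scalable.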
One-sentence Summary: The first work to explore both node-adaptive feature and label propagation schemes for scalable GNNs.
Supplementary Material: zip
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2206.04355/code)