HGAMLP: A Scalable Training Framework for Heterogeneous Graph Learning

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: heterogeneous graph, graph neural networks, scalability
TL;DR: A new state-of-the-art scalable HGNN model for the large public heterogeneous graph dataset ogbn-mag.
Abstract: Heterogeneous graphs contain rich semantic information that can be exploited by heterogeneous graph neural networks (HGNNs). However, scaling HGNNs to large graphs is challenging due to their high computational cost. Existing scalable HGNNs use a general subgraph construction method and a mean aggregator to reduce complexity. Despite their high scalability, they ignore two key characteristics of heterogeneous graphs, leading to low predictive performance. First, they adopt a fixed knowledge extractor during both the local feature aggregation and the global semantic fusion of multiple meta-paths. Second, they bury the graph structure information of higher-order meta-paths and fail to fully leverage higher-order global information. In this paper, we address these two limitations and propose a scalable HGNN framework called Heterogeneous Graph Attention Multi-Layer Perceptron (HGAMLP). Our framework employs a local multi-knowledge extractor to enhance node representations and leverages a de-redundancy mechanism to extract pure graph structure information from higher-order meta-paths. In addition, it adopts a node-adaptive weight adjustment mechanism to fuse the global knowledge from each local knowledge extractor. We evaluate our framework on five commonly used heterogeneous graph datasets and show that it outperforms state-of-the-art baselines in both accuracy and speed. Notably, our framework achieves the best performance on the large public heterogeneous graph dataset (i.e., ogbn-mag) of the Open Graph Benchmark.
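The node-adaptive fusion of multiple knowledge channels described above can be sketched in PyTorch as follows, assuming a precomputation-style pipeline (as in NARS/SeHGNN) where meta-path features are aggregated offline. The class NodeAdaptiveFusion, its shapes, and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: fuse K precomputed feature channels (e.g., meta-path /
# extractor pairs) with per-node attention weights, then classify with an MLP.
# All names and dimensions here are hypothetical placeholders.
import torch
import torch.nn as nn


class NodeAdaptiveFusion(nn.Module):
    def __init__(self, num_channels: int, in_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        # One linear projection per channel (e.g., per meta-path / knowledge extractor).
        self.proj = nn.ModuleList([nn.Linear(in_dim, hidden_dim) for _ in range(num_channels)])
        # Scores each projected channel so fusion weights can differ per node.
        self.att = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Sequential(
            nn.ReLU(), nn.Dropout(0.5), nn.Linear(hidden_dim, num_classes)
        )

    def forward(self, channels):
        # channels: list of K tensors of shape [N, in_dim], precomputed offline
        h = torch.stack([p(x) for p, x in zip(self.proj, channels)], dim=1)  # [N, K, H]
        w = torch.softmax(self.att(torch.tanh(h)).squeeze(-1), dim=1)        # [N, K]
        fused = (w.unsqueeze(-1) * h).sum(dim=1)                             # [N, H]
        return self.classifier(fused)


if __name__ == "__main__":
    N, K, D = 1000, 4, 128                            # nodes, channels, raw feature dim
    feats = [torch.randn(N, D) for _ in range(K)]     # stand-ins for precomputed meta-path features
    model = NodeAdaptiveFusion(K, D, 256, num_classes=349)  # ogbn-mag has 349 classes
    logits = model(feats)
    print(logits.shape)                               # torch.Size([1000, 349])
```

Because all neighbor aggregation happens offline, training reduces to mini-batch MLP updates over node feature matrices, which is what makes this family of methods scalable to graphs the size of ogbn-mag.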
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 4402