Rethinking Node-wise Propagation for Large-scale Graph Learning

Published: 01 Jan 2024, Last Modified: 15 Oct 2025, WWW 2024, CC BY-SA 4.0
Abstract: Scalable graph neural networks (GNNs) have emerged as a promising technique, exhibiting superior predictive performance and high running efficiency across numerous large-scale graph-based web applications. However, (i) most scalable GNNs apply the same propagation rules to all nodes, neglecting their topological uniqueness; (ii) existing node-wise propagation optimization strategies are insufficient on web-scale graphs with intricate topology, where a full portrayal of nodes' local properties is required. Intuitively, different nodes in web-scale graphs play distinct topological roles, so propagating them indiscriminately or neglecting their local contexts may compromise the quality of node representations. To address these issues, we propose Adaptive Topology-aware Propagation (ATP), which reduces potential high-bias propagation and extracts the structural patterns of each node in a scalable manner, improving both running efficiency and predictive performance. Remarkably, ATP is crafted as a plug-and-play node-wise propagation optimization strategy, allowing offline execution independent of the graph learning process from a new perspective. Therefore, it can be seamlessly integrated into most scalable GNNs while remaining orthogonal to existing node-wise propagation optimization strategies. Extensive experiments on 12 datasets demonstrate the effectiveness of ATP.
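To make the "offline, node-wise propagation" idea concrete, below is a minimal sketch of how a decoupled scalable-GNN pipeline could precompute propagated features with per-node weights. The function name `offline_nodewise_propagation` and the degree-based weighting are illustrative assumptions, not ATP's actual weighting rule, which is defined in the paper itself.

```python
import numpy as np
import scipy.sparse as sp

def offline_nodewise_propagation(adj, features, num_hops=3):
    """Sketch: offline, node-wise weighted feature propagation.

    adj:      scipy.sparse adjacency matrix (N x N)
    features: dense node-feature matrix (N x d)

    The per-node weights below are a simple degree-based heuristic used
    only for illustration; ATP's topology-aware weighting differs.
    """
    n = adj.shape[0]
    adj = adj + sp.eye(n)                      # add self-loops
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(deg))
    a_hat = d_inv_sqrt @ adj @ d_inv_sqrt      # symmetric normalization

    # Illustrative node-wise weights: down-weight propagation for
    # high-degree (hub) nodes, whose neighborhoods are more likely to
    # introduce biased messages.
    node_weight = sp.diags(1.0 / np.log(deg + np.e))

    out = features.copy()
    h = features
    for _ in range(num_hops):
        h = node_weight @ (a_hat @ h)          # weighted k-hop propagation
        out = out + h                          # aggregate multi-hop messages
    return out / (num_hops + 1)
```

Because this step runs once, offline, and independently of training, its output can be fed to any simple downstream learner (e.g., an MLP), which is what allows such a strategy to plug into most scalable GNNs without changing their training loop.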