LocalDGP: local degree-balanced graph partitioning for lightweight GNNs

Published: 2025 · Last Modified: 19 Jan 2026 · Appl. Intell. 2025 · CC BY-SA 4.0
Abstract: Graph neural networks (GNNs) have been widely employed in various fields, including knowledge graphs and social networks. When dealing with large-scale graphs, traditional full-batch training methods suffer from excessive GPU memory consumption. To address this problem, subgraph sampling methods divide the graph into multiple subgraphs and then train the GNN on each subgraph sequentially, which reduces GPU memory consumption. However, existing graph partitioning algorithms (e.g., METIS) require global graph information before partitioning and consume a significant amount of memory to store this information, which is prohibitive for large-scale graphs. Moreover, the GNN parameters in subgraph sampling methods are shared among all the subgraphs. Structural differences between the subgraphs and the global graph (e.g., differences in node degree distributions) introduce a gradient bias on the subgraphs, degrading GNN accuracy. Therefore, a local degree-balanced graph partitioning algorithm named LocalDGP is proposed in this paper. First, LocalDGP acquires only local graph information during the partitioning process, which reduces memory consumption. Second, nodes are partitioned into subgraphs in a degree-balanced manner, so that the structure of each subgraph remains consistent with that of the global graph. Extensive experimental results on four graph datasets demonstrate that LocalDGP improves the accuracy of GNNs while reducing memory consumption. The code is publicly available at https://github.com/li143yf/LocalDGP.
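As a rough illustration of the degree-balancing idea only (not the authors' actual LocalDGP algorithm; see the linked repository for that), a greedy sketch could assign each node to the subgraph whose accumulated degree is currently smallest, consulting nothing beyond each node's own degree as local information. The function name and interface below are hypothetical.

```python
import numpy as np
import scipy.sparse as sp


def degree_balanced_partition(adj: sp.csr_matrix, num_parts: int) -> np.ndarray:
    """Greedily assign each node to the part with the smallest cumulative degree.

    Only each node's own degree (local information) is inspected, so no
    global partition structure needs to be held in memory at once.
    This is a simplified sketch, not the LocalDGP algorithm itself.
    """
    degrees = np.asarray(adj.sum(axis=1)).ravel()
    part_of = np.empty(adj.shape[0], dtype=np.int64)
    part_degree = np.zeros(num_parts, dtype=np.float64)

    # Visit high-degree nodes first so the greedy balance is tighter.
    for node in np.argsort(-degrees):
        target = int(np.argmin(part_degree))
        part_of[node] = target
        part_degree[target] += degrees[node]
    return part_of


if __name__ == "__main__":
    # Toy usage: partition a random sparse graph into 4 parts and
    # check that the total degree per part is roughly balanced.
    rng = np.random.default_rng(0)
    a = sp.random(1000, 1000, density=0.01, random_state=rng, format="csr")
    a = ((a + a.T) > 0).astype(np.float64)  # symmetrize into an undirected graph
    parts = degree_balanced_partition(a, num_parts=4)
    deg = np.asarray(a.sum(axis=1)).ravel()
    print([deg[parts == p].sum() for p in range(4)])
```

The point of the sketch is that the degree sums of the parts stay close, so each subgraph's degree distribution resembles that of the global graph, which is the property the abstract argues reduces gradient bias.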