Information-Oriented Random Walks and Pipeline Optimization for Distributed Graph Embedding

Published: 01 Jan 2025, Last Modified: 30 Jan 2025. IEEE Trans. Knowl. Data Eng., 2025. License: CC BY-SA 4.0.
Abstract: Graph embedding maps graph nodes to low-dimensional vectors and is widely used in machine learning tasks. The increasing availability of billion-edge graphs underscores the importance of learning efficient and effective embeddings on large graphs, such as for link prediction on Twitter with over one billion edges. Most existing graph embedding methods fall short of high data scalability. In this paper, we present a general-purpose, distributed, information-centric, random walk-based, and pipeline-optimized graph embedding framework, DistGER-Pipe, which scales to embed billion-edge graphs. DistGER-Pipe incrementally computes information-centric random walks to reduce redundant computation, yielding more effective and efficient graph embedding. It further leverages a multi-proximity-aware, streaming, parallel graph partitioning strategy that simultaneously achieves high local partition quality and excellent workload balancing across machines. DistGER-Pipe also improves the distributed Skip-Gram learning model used to generate node embeddings by optimizing access locality, CPU throughput, and synchronization efficiency. Finally, DistGER-Pipe designs a pipelined execution that decouples the operators of the sampling and training procedures, with inter-round serial and intra-round parallel processing, attaining optimal utilization of computing resources. Experiments on real-world graphs demonstrate that, compared to state-of-the-art distributed graph embedding frameworks, including KnightKing, DistDGL, PyTorch-BigGraph, and DistGER, DistGER-Pipe exhibits 3.15×–1053× acceleration, a 45% reduction in cross-machine communication, a >10% effectiveness improvement in downstream tasks, and a 38% enhancement in CPU utilization.
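As a rough illustration of the pipelined execution the abstract describes, the minimal sketch below decouples the sampling and training operators with a bounded queue: rounds run serially, while within each round walk sampling and Skip-Gram training overlap in parallel. Every name here (sample_walks, train_skipgram, NUM_ROUNDS, BATCHES_PER_ROUND) is a hypothetical placeholder, not DistGER-Pipe's actual API.

```python
# Hypothetical sketch of inter-round serial, intra-round parallel pipelining:
# within a round, a sampler thread produces walk batches while the main thread
# consumes them for training; rounds are separated by a join barrier.
import queue
import threading

NUM_ROUNDS = 4          # rounds are processed one after another (inter-round serial)
BATCHES_PER_ROUND = 8   # walk batches produced and consumed within a round

def sample_walks(round_id, batch_id):
    """Placeholder for information-centric random-walk sampling."""
    return [f"walk-{round_id}-{batch_id}"]

def train_skipgram(walks):
    """Placeholder for distributed Skip-Gram training on one walk batch."""
    pass

for round_id in range(NUM_ROUNDS):
    batches = queue.Queue(maxsize=2)  # bounded queue decouples the two operators

    def producer():
        for batch_id in range(BATCHES_PER_ROUND):
            batches.put(sample_walks(round_id, batch_id))
        batches.put(None)  # sentinel: this round's sampling is finished

    sampler = threading.Thread(target=producer)
    sampler.start()
    while True:  # train on batches as they arrive (intra-round parallel)
        walks = batches.get()
        if walks is None:
            break
        train_skipgram(walks)
    sampler.join()  # barrier: the next round starts only after this one completes
```

In this toy form, the bounded queue is what keeps sampling from racing far ahead of training, so both operators stay busy; the per-round join stands in for whatever synchronization the real framework uses between rounds.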
