Keywords: federated learning, graph neural network, optimization, communication efficiency
Abstract: Graph neural networks (GNNs) have achieved great success in a wide variety of graph-based learning applications.
While distributed GNN training with sampling-based mini-batches expedites learning on large graphs, it is not applicable to geo-distributed data that must remain on-site to preserve privacy.
On the other hand, federated learning (FL) has been widely used to enable privacy-preserving training under data parallelism.
However, applying FL directly to GNNs either loses cross-client neighbor information or incurs expensive cross-client neighbor sampling and communication costs, owing to the large graph size and the dependencies between nodes residing on different clients.
To overcome these challenges, we propose a new federated graph learning (FGL) algorithmic framework called Swift-FedGNN that primarily performs efficient parallel local training and periodically conducts cross-client training.
Specifically, in Swift-FedGNN, each client *primarily* trains a local GNN model using only its local graph data, while a small set of randomly sampled clients *periodically* trains their local GNN models using both their local graph data and the dependent nodes held by other clients (see the sketch below the abstract).
We theoretically establish the convergence performance of Swift-FedGNN and show that it enjoys a convergence rate of $\mathcal{O}\left( T^{-1/2} \right)$, matching the state-of-the-art (SOTA) rate of sampling-based GNN methods, despite operating in the challenging FL setting.
Extensive experiments on real-world datasets show that Swift-FedGNN significantly outperforms the SOTA FGL approaches in terms of efficiency, while achieving comparable accuracy.
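The following is a minimal schematic sketch of the training pattern described in the abstract, not the authors' implementation: every round all clients perform a purely local mini-batch step, and every few rounds a randomly sampled subset additionally performs a cross-client step before standard FL-style averaging. All names and hyperparameters (`NUM_CLIENTS`, `TAU`, `SAMPLE_SIZE`, `local_step`, `cross_client_step`) are illustrative assumptions, and gradients are stubbed so the loop runs end to end.

```python
# Hedged sketch of a Swift-FedGNN-style training loop; gradients are stubbed.
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS = 8     # clients, each holding a private subgraph (assumed)
DIM = 16            # stand-in for the GNN parameter dimension (assumed)
TAU = 5             # cross-client training every TAU rounds (assumed)
SAMPLE_SIZE = 2     # clients sampled for periodic cross-client training (assumed)
LR = 0.1
ROUNDS = 20

def local_step(params):
    """Placeholder for a sampling-based mini-batch GNN step that uses only
    the client's own subgraph (no cross-client neighbors)."""
    grad = rng.normal(size=params.shape)   # stub gradient
    return params - LR * grad

def cross_client_step(params):
    """Placeholder for a step that also uses dependent (boundary) nodes
    fetched from other clients, as the abstract describes."""
    grad = rng.normal(size=params.shape)   # stub gradient incl. remote neighbors
    return params - LR * grad

# Server keeps a global model; clients start each round from it.
global_params = np.zeros(DIM)

for t in range(ROUNDS):
    client_params = []
    cross_clients = (
        set(rng.choice(NUM_CLIENTS, SAMPLE_SIZE, replace=False))
        if t % TAU == 0 else set()
    )

    for c in range(NUM_CLIENTS):
        p = global_params.copy()
        p = local_step(p)                  # primary: purely local training
        if c in cross_clients:             # periodic: cross-client training
            p = cross_client_step(p)
        client_params.append(p)

    # Standard FL-style aggregation (e.g., parameter averaging).
    global_params = np.mean(client_params, axis=0)

print("final parameter norm:", np.linalg.norm(global_params))
```

The split between `local_step` and `cross_client_step` is only meant to illustrate why most rounds avoid cross-client communication; the paper's actual sampling, aggregation, and communication schedule should be taken from the full text.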
Supplementary Material: zip
Primary Area: optimization
Submission Number: 15236