Integrating commonality and individuality for graph federated learning: A graph spectrum perspective

Published: 25 Jan 2026, Last Modified: 25 Jan 2026 · TechRxiv 2025 · CC BY-NC-ND 4.0
Abstract: Real-world graphs are usually distributed across multiple organizations. Due to security regulations and privacy protection, distributed graphs cannot be collected together and trained centrally. Graph Federated Learning (GFL) has therefore been proposed to carry out distributed collaborative graph representation learning while protecting the privacy of graph data. However, existing methods suffer from heterogeneity (i.e., non-independent and identically distributed graph data across different clients), which usually brings about unstable training, slow convergence, and degraded performance. Inspired by the idea of ‘seeking common ground while preserving individual differences’ in Personalized Federated Learning (PFL), we propose a novel graph federated learning method by Integrating Commonality and Individuality (FedICI), which maximizes the consistency of commonalities across clients while minimizing the correlation of client-specific individualities among clients. Specifically, a spectral graph neural network with homophily bases and heterophily bases is employed to extract low-frequency and high-frequency components from the graph data on each client, where the low-frequency and high-frequency components represent a client's stable and discriminative signals, respectively. On one hand, with the goal of maximizing the consistency of commonalities across different clients, common graph patterns are extracted from the low-frequency components on the server side. On the other hand, with the aim of minimizing the correlation of individualities across heterogeneous clients, client-specific graph patterns carried by the high-frequency components are encouraged to be orthogonalized. Theoretically, we prove that our proposed FedICI achieves stable and fast convergence, where the convergence error decreases in proportion to the inverse square root of the number of local training epochs.
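To make the spectral decomposition concrete, the following is a minimal sketch (not the paper's implementation) of how node features can be split into low-frequency (smooth, homophilic) and high-frequency (discriminative, heterophilic) components via the normalized graph Laplacian, together with a cross-client orthogonality penalty on the high-frequency representations. The eigenvalue split point, the Frobenius-norm penalty, and all function names here are illustrative assumptions; the paper uses learned homophily/heterophily bases rather than an explicit eigendecomposition.

```python
import numpy as np

def spectral_components(adj, x):
    """Split node features x into low- and high-frequency parts using the
    symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
    The half-spectrum split point is an illustrative choice."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # eigh returns eigenvalues in ascending order: small = low frequency.
    _, eigvecs = np.linalg.eigh(lap)
    half = len(adj) // 2
    low = eigvecs[:, :half] @ (eigvecs[:, :half].T @ x)    # smooth signal
    high = eigvecs[:, half:] @ (eigvecs[:, half:].T @ x)   # discriminative signal
    return low, high

def orthogonality_penalty(h_a, h_b):
    """Hypothetical decorrelation term between two clients' high-frequency
    representations: squared Frobenius norm of H_a^T H_b (zero iff orthogonal)."""
    return float(np.linalg.norm(h_a.T @ h_b, "fro") ** 2)
```

Because the eigenvectors form a complete orthonormal basis, the two components reconstruct the input exactly (`low + high == x`), so the split loses no information; only how each part is shared or decorrelated across clients differs.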
Empirically, we conduct extensive experiments on six homophilic and five heterophilic graph datasets under both non-overlapping and overlapping settings. Experimental results validate the superiority of our method over eleven state-of-the-art methods. Notably, FedICI outperforms the second-best method by an average margin of 3.36% across all heterophilic datasets, while also achieving a more than threefold improvement in efficiency.