Abstract: As an emerging distributed graph learning paradigm, Federated Graph Learning (FGL) enables collaborative model training across local systems while preserving data privacy, and has rapidly gained attention in graph-based AI.
We review existing FGL approaches and categorize their optimization mechanisms into:
(1) Server-Client (S-C), where clients upload local model parameters for server-side aggregation and global updates;
(2) Client-Client (C-C), which allows clients to exchange information directly and customize their local training processes (a minimal sketch of both mechanisms follows).
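To make the distinction concrete, the following minimal Python sketch (illustrative only; the function names are our own and come from no FGL library) contrasts the two mechanisms: S-C averages uploaded parameters into one identical global update, while C-C delivers peer-specific messages that each receiving client can use to customize its local training.

```python
# Illustrative sketch of the two optimization mechanisms (hypothetical names).
from typing import Dict, List
import torch

def s_c_round(client_params: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
    """S-C: the server aggregates uploaded local parameters (FedAvg-style
    averaging here) and broadcasts the same global update to every client."""
    return {
        name: torch.stack([params[name] for params in client_params]).mean(dim=0)
        for name in client_params[0]
    }

def c_c_round(outbox: Dict[int, Dict[int, torch.Tensor]], receiver: int) -> List[torch.Tensor]:
    """C-C: clients exchange information directly; each receiver collects the
    messages addressed specifically to it and uses them in local training."""
    return [msgs[receiver] for sender, msgs in outbox.items() if receiver in msgs]
```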
We reveal that C-C offers superior potential for FGL over S-C, owing to its more refined communication structure and its capacity to model global graph knowledge: each client can send customized information to other clients, enabling fine-grained personalized local optimization.
However, existing C-C methods broadcast identical and redundant node representations, incurring high communication costs and node-level privacy risks. To this end, we propose FedC4, which combines graph \underline{C}ondensation with \underline{C}-\underline{C} \underline{C}ollaboration optimization. Specifically, FedC4 employs graph condensation to distill the knowledge of each client's private graph into a few synthetic node embeddings, transmitting these instead of node-level knowledge and thereby achieving low-cost, privacy-preserving knowledge sharing among clients. Moreover, to enable fine-grained personalized local optimization, FedC4 introduces three novel modules that allow a source client to send distinct node representations tailored to each target client's graph properties, enhancing the target's local optimization with global feature and topology information. Experiments on eight public real-world datasets show that FedC4 outperforms state-of-the-art baselines in both task performance and communication cost.
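To illustrate the condensation step, the sketch below distills a client's node embeddings into a handful of synthetic ones by matching class-conditional means. This distribution-matching surrogate is our own simplifying assumption for illustration; it is not FedC4's actual condensation objective, and all names are hypothetical.

```python
# Simplified sketch of graph condensation via distribution matching
# (NOT FedC4's actual objective; names are illustrative).
import torch

def condense(node_embeddings: torch.Tensor, labels: torch.Tensor,
             per_class: int = 2, steps: int = 200, lr: float = 0.01):
    """Distill real node embeddings into a few learnable synthetic ones by
    matching class-conditional means (a crude stand-in for the gradient- or
    distribution-matching objectives used in the graph condensation literature)."""
    classes = labels.unique()
    syn = torch.randn(len(classes) * per_class, node_embeddings.size(1), requires_grad=True)
    syn_labels = classes.repeat_interleave(per_class)
    opt = torch.optim.Adam([syn], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for c in classes:
            real_mean = node_embeddings[labels == c].mean(dim=0)
            syn_mean = syn[syn_labels == c].mean(dim=0)
            loss = loss + torch.norm(real_mean - syn_mean) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
    return syn.detach(), syn_labels  # compact, shareable surrogate for the private graph

# Usage: a few synthetic embeddings replace the full private graph in transit.
emb = torch.randn(100, 16)            # stand-in for a client's node embeddings
lab = torch.randint(0, 4, (100,))     # stand-in node labels
syn_emb, syn_lab = condense(emb, lab)  # e.g. 8 synthetic nodes instead of 100 real ones
```

Sharing only such compact synthetic embeddings, rather than raw node representations, is what keeps both communication cost and node-level exposure low; in FedC4, the three proposed modules would further tailor which representations each source client sends to each target.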