Abstract: Graph neural networks (GNNs) have shown significant success in modeling graph data, and Federated Graph Learning (FGL) empowers clients to collaboratively train GNNs in a distributed manner while preserving data privacy. However, FGL faces unique challenges when the general neighbor distribution pattern of nodes varies significantly across clients. Specifically, FGL methods usually require that the graph data owned by all clients is homophilic to ensure similar neighbor distribution patterns of nodes. Such an assumption ensures that the learned knowledge is consistent across the local models from all clients. Therefore, these local models can be properly aggregated as a global model without undermining the overall performance. Nevertheless, when the neighbor distribution patterns of nodes vary across different clients (e.g., when clients hold graphs with different levels of heterophily), their local models may learn different and even conflicting knowledge from their node-level predictive tasks. Consequently, aggregating these local models usually leads to catastrophic performance deterioration of the global model. To address this challenge, we propose FedHERO, an FGL framework designed to harness and share insights from heterophilic graphs effectively. At the heart of FedHERO is a dual-channel GNN equipped with a structure learner, engineered to discern the structural knowledge encoded in the local graphs. With this specialized component, FedHERO enables the local model of each client to identify and learn patterns that are universally applicable across graphs with different node neighbor distribution patterns. FedHERO not only enhances the performance of individual client models by leveraging both local and shared structural insights but also sets a new precedent in this field for effectively handling graph data with various node neighbor distribution patterns. We conduct extensive experiments to validate the superior performance of FedHERO against existing alternatives.
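To make the dual-channel design concrete, below is a minimal PyTorch sketch of the kind of architecture the abstract describes: one GNN channel reads the observed local graph, while a second channel reads a latent graph produced by a structure learner. The cosine-similarity projection, top-k sparsification, dense-adjacency GCN layers, and all dimensions are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
# Illustrative sketch (not the authors' implementation) of a dual-channel GNN
# with a structure learner. Dense adjacencies and a cosine-similarity latent
# graph are assumptions made for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCNLayer(nn.Module):
    """A simple GCN-style layer operating on a dense adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Aggregate neighbor features, then transform.
        return self.lin(adj @ x)


class StructureLearner(nn.Module):
    """Learns a latent graph from node features via projected cosine similarity."""
    def __init__(self, in_dim, hid_dim, k=10):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)
        self.k = k

    def forward(self, x):
        z = F.normalize(self.proj(x), dim=-1)
        sim = z @ z.t()                                   # pairwise similarities
        # Keep only the top-k neighbors per node (hypothetical sparsification).
        topk = torch.topk(sim, k=min(self.k, sim.size(-1)), dim=-1)
        mask = torch.zeros_like(sim).scatter_(-1, topk.indices, 1.0)
        adj = F.relu(sim) * mask
        # Symmetrically normalize the learned adjacency.
        deg = adj.sum(-1).clamp(min=1e-6)
        d_inv_sqrt = deg.pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)


class DualChannelGNN(nn.Module):
    """One channel reads the observed graph; the other reads the learned latent graph."""
    def __init__(self, in_dim, hid_dim, num_classes, k=10):
        super().__init__()
        self.structure_learner = StructureLearner(in_dim, hid_dim, k)
        self.obs_gnn = DenseGCNLayer(in_dim, hid_dim)
        self.lat_gnn = DenseGCNLayer(in_dim, hid_dim)
        self.classifier = nn.Linear(2 * hid_dim, num_classes)

    def forward(self, x, adj_observed):
        adj_latent = self.structure_learner(x)
        h_obs = F.relu(self.obs_gnn(x, adj_observed))
        h_lat = F.relu(self.lat_gnn(x, adj_latent))
        return self.classifier(torch.cat([h_obs, h_lat], dim=-1))


if __name__ == "__main__":
    n, d = 50, 16
    x = torch.randn(n, d)
    adj = (torch.rand(n, n) < 0.1).float()                # toy observed adjacency
    model = DualChannelGNN(in_dim=d, hid_dim=32, num_classes=4)
    print(model(x, adj).shape)                            # torch.Size([50, 4])
```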
Submission Type: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We have submitted the final version of our paper, incorporating the requested minor revisions. Specifically, we made the following modifications to further improve the work.
- We refined Appendix F, highlighting the distinctions between FedHERO and FedStar, especially the learnable latent-graph generator and the per-layer dual-channel integration.
- We fixed the typo in Eq. (4) to correct and clarify the top-k masking mechanism.
- We added Appendix E to provide runtime and memory measurements that support our scalability claims.
- We added, at the end of the "Dataset" paragraph in Section 4.1, the rationale that the non-overlapping-clients assumption serves only as an evaluation protocol.
- We rewrote Section 4.7, acknowledging the limitation of our privacy-protection evidence and clarifying that LIA provides empirical, but not formal, privacy guarantees.
Code: https://github.com/Chen-1031/FedHERO
Assigned Action Editor: ~Ying_Wei1
Submission Number: 4766