Keywords: Federated Graph Learning, Graph Neural Networks, Poisoning Attack
Abstract: Federated Graph Learning (FGL) enables collaborative training of Graph Neural Networks (GNNs) without sharing raw data. While prior work has focused on privacy leakage risks, we reveal a more direct and structurally embedded threat: a gray-box poisoning attack that manipulates shared neighbor representations during training. Specifically, we show that auxiliary-information-sharing frameworks, which collaboratively train node generators, create a novel and powerful attack surface. A malicious client can poison shared gradients during generator updates, causing benign clients to unknowingly incorporate compromised synthetic nodes into their local subgraphs. These poisoned nodes propagate corrupted signals during training, resulting in persistent model degradation. We formalize this threat model, propose an optimization-based attack, and demonstrate its effectiveness in degrading model performance while evading standard defenses. The attack is stealthy, model-agnostic, and remains effective across diverse datasets, partition strategies, and numbers of clients. Our findings highlight a critical vulnerability in FGL systems and underscore the urgent need for dedicated robustness mechanisms against attacks targeting the data-generation layer.
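To make the gradient-poisoning step concrete, here is a minimal, hypothetical sketch of how a malicious client could bias its contribution to a collaboratively trained neighbor generator. This is not the paper's actual method: the `NeighborGenerator`, `poisoned_generator_grads`, `target_feat`, and `poison_weight` names, the MLP generator, and the MSE objectives are all illustrative assumptions standing in for the framework's real generator and losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy neighbor generator: maps a node's features to one synthetic neighbor's
# features. Real FGL frameworks use GNN-based generators; this MLP is a stand-in.
class NeighborGenerator(nn.Module):
    def __init__(self, feat_dim: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, feat_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def poisoned_generator_grads(gen, node_feats, true_neigh_feats,
                             target_feat, poison_weight=0.1):
    """Gradients a malicious client might contribute to the shared generator.

    The benign objective reconstructs true neighbor features; the added poison
    term pulls the generator's output toward an attacker-chosen target, so
    benign clients later mend their subgraphs with compromised synthetic nodes.
    """
    gen.zero_grad()
    synth = gen(node_feats)
    benign_loss = F.mse_loss(synth, true_neigh_feats)      # honest reconstruction
    poison_loss = F.mse_loss(synth, target_feat.expand_as(synth))  # attacker target
    # A small poison_weight keeps the shared gradients statistically close to
    # benign updates, which is what makes the attack hard to flag.
    (benign_loss + poison_weight * poison_loss).backward()
    return [p.grad.clone() for p in gen.parameters()]

# Usage: one poisoned local round on random stand-in data.
feat_dim = 16
gen = NeighborGenerator(feat_dim)
grads = poisoned_generator_grads(
    gen,
    node_feats=torch.randn(8, feat_dim),
    true_neigh_feats=torch.randn(8, feat_dim),
    target_feat=torch.randn(feat_dim),
)
```

In this sketch, `poison_weight` acts as the stealth knob: it trades attack strength against the statistical similarity of the poisoned gradients to benign ones.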
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 7428