Keywords: Federated Learning, Graph Neural Network, Membership Inference, Graph Attack
TL;DR: We propose a membership inference attack tailored to federated GNNs that extends membership inference to cross-client attribution, determining not only whether a node was used in training but also which client owns it.
Abstract: Graph Neural Networks (GNNs) are increasingly integrated with federated learning (FL) to protect data locality in domains such as social networks, finance, and biology. While membership inference attacks (MIAs) have been widely studied in centralized GNNs, their scope in federated settings remains underexplored. We present CC-MIA, a framework that reformulates membership inference in federated GNNs as a cross-client attribution problem, where an adversarial client aims to determine not only whether a node was part of training but also which client owns it. CC-MIA operates under a realistic threat model: the adversary is a legitimate participant who observes global updates and can eavesdrop on other clients’ gradients, a well-studied vulnerability in recent gradient inversion attacks. To approximate the target data distribution, CC-MIA leverages publicly available shadow datasets from the same domain, consistent with established MIA practice. The attack combines shadow-based training for membership inference, gradient inversion to reconstruct client subgraphs, and prototype-based matching to assign nodes to clients. Experiments on six benchmark datasets and five federated schemes show that CC-MIA consistently outperforms strong MIA baselines, achieving up to 72.16% improvement in inference accuracy. These results highlight that membership inference in federated GNNs naturally extends to client attribution, underscoring the need for defenses robust to gradient-level and client-level leakage. Code is available at https://anonymous.4open.science/r/CC-MIA-54C3.
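The abstract's final attack stage, prototype-based matching, can be illustrated with a minimal sketch. This is not the paper's implementation; it only assumes the generic scheme the abstract names: compute a per-client "prototype" (here, the mean of node embeddings attributed to that client) and assign each queried node to the client with the nearest prototype. The function names and the Euclidean distance choice are illustrative assumptions.

```python
import numpy as np

def client_prototypes(embeddings, client_ids):
    """Illustrative: mean embedding ("prototype") per client.

    embeddings: (num_nodes, d) array of node embeddings.
    client_ids: (num_nodes,) array mapping each node to a client.
    """
    return {c: embeddings[client_ids == c].mean(axis=0)
            for c in np.unique(client_ids)}

def attribute_nodes(node_embeddings, protos):
    """Assign each node to the client whose prototype is nearest.

    Uses Euclidean distance as an assumed similarity measure.
    """
    clients = sorted(protos)
    proto_mat = np.stack([protos[c] for c in clients])       # (C, d)
    # Pairwise node-to-prototype distances: (num_nodes, C)
    dists = np.linalg.norm(
        node_embeddings[:, None, :] - proto_mat[None, :, :], axis=-1)
    return [clients[i] for i in dists.argmin(axis=1)]
```

In the full attack described above, the embeddings fed to such a matcher would come from subgraphs reconstructed via gradient inversion rather than from ground-truth client data.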
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 8389