Abstract: Recently, to promote private graph data sharing, a collaborative graph learning paradigm known as Graph Split Learning (GSL) has been proposed. However, current security research on GSL focuses on one-shot learning and ignores the fact that, in practice, model training is usually an ongoing process: fresh data must be added periodically to preserve the timeliness of the trained model. In this paper, we propose the first attack against GSL, the Graph Update Leakage Attack (Gula), to demonstrate the vulnerability of GSL to privacy leakage when it runs with updated training sets. Specifically, we systematically analyze the adversary's knowledge of GSL along three dimensions, leading to 8 different implementations of Gula. All 8 attacks demonstrate that a malicious server in GSL can leverage the posteriors received during the forward computation stage to reconstruct the updated graph data of clients. Extensive experiments on 6 real-world datasets and 8 different GNN models show that our attacks can effectively reveal the private links and node features in the update set.
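To make the threat surface concrete, the following is a minimal, hedged sketch (not the paper's actual GSL protocol or the Gula attack) of a split graph-learning forward step in which a client runs a small GCN-style model over its private graph and uploads the resulting node posteriors to the server; the cross-round drift of those posteriors after a graph update is the kind of signal an update-leakage attack could exploit. All names (`ClientGNN`, `adj_old`, etc.) are illustrative assumptions.

```python
# Hedged sketch, not the paper's protocol: a toy forward step of split graph
# learning where a client uploads node-level class posteriors to the server.
import torch
import torch.nn as nn


class ClientGNN(nn.Module):
    """Client-side GNN: one GCN-style layer plus a classification head."""

    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, num_classes)

    def forward(self, adj, x):
        a_hat = adj + torch.eye(adj.size(0))             # add self-loops
        d = a_hat.sum(dim=1).clamp(min=1.0).pow(-0.5)    # D^{-1/2}
        a_norm = d.unsqueeze(1) * a_hat * d.unsqueeze(0) # symmetric normalization
        h = torch.relu(a_norm @ self.lin1(x))
        return torch.softmax(self.lin2(h), dim=-1)       # node-level posteriors


# Forward passes before and after a graph update. The server only observes the
# uploaded posteriors, but their drift across rounds is the kind of signal an
# update-leakage attack could use to infer new links and node features.
torch.manual_seed(0)
n, in_dim = 6, 8
x = torch.randn(n, in_dim)
adj_old = (torch.rand(n, n) < 0.3).float().triu(1)
adj_old = adj_old + adj_old.t()
adj_new = adj_old.clone()
adj_new[0, 1] = adj_new[1, 0] = 1.0                      # edge added in the update set

client = ClientGNN(in_dim, 16, 3)
post_round_t = client(adj_old, x)    # posteriors the server receives in round t
post_round_t1 = client(adj_new, x)   # posteriors the server receives in round t+1
print((post_round_t1 - post_round_t).abs().sum(dim=1))   # per-node drift
```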