Abstract: Federated learning enables multi-party collaborative training but also introduces several security risks. These risks have been studied extensively for image models, yet they remain relatively unexplored for graphs. Unlike most existing graph-based attacks, a label-flipping attack requires no changes to the graph structure and is therefore highly stealthy. This paper explores a Graph Federated Label-Flipping Attack (Graph-FLFA) and proposes a new malicious gradient computation strategy for federated graph models. The attack aims to maximally disrupt the classification of specific target nodes in a node classification task while leaving the classification performance of all other nodes unaffected. The strategy is both targeted and stealthy: it balances the influence of the different labels and achieves a strong attack effect even at very low poisoning ratios. Extensive experiments on four benchmark datasets show that Graph-FLFA attains a high attack success rate across different GNN-based models, achieving state-of-the-art attack performance, and that it evades the detection methods used in common defenses.
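As a rough illustration only (not the paper's actual Graph-FLFA gradient strategy, which is not specified in the abstract), the basic label-flipping step a malicious client might perform on its local node labels could look like the following sketch. The function `flip_labels`, the choice of target nodes, and the class count are all hypothetical; the point is that only training labels change, while the graph structure is left untouched.

```python
import numpy as np

def flip_labels(labels, target_nodes, num_classes, rng):
    """Hypothetical label-flipping poisoning step: for each targeted
    node, replace its true label with a different class chosen at
    random. Only the labels held by the malicious client are altered;
    the graph structure is never modified."""
    poisoned = labels.copy()
    for v in target_nodes:
        other_classes = [c for c in range(num_classes) if c != labels[v]]
        poisoned[v] = rng.choice(other_classes)
    return poisoned

# Toy usage: 10 nodes, 3 classes, poison 2 target nodes (20% ratio).
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=10)
targets = rng.choice(10, size=2, replace=False)
poisoned = flip_labels(labels, targets, num_classes=3, rng=rng)
print("clean:   ", labels)
print("poisoned:", poisoned)
```

Under this kind of poisoning, the malicious client trains on `poisoned` labels and contributes the resulting gradients to federated aggregation, which is what makes the attack hard to detect from model updates alone.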