Keywords: Graph Learning, Federated Learning
Abstract: Federated graph learning (FGL) has rapidly gained prominence as a privacy-preserving collaborative paradigm. However, the increasing prevalence of backdoor attacks poses significant challenges to federated systems. These attacks inject carefully crafted triggers into training data, causing the model to produce erroneous predictions. Recent research has shown that the diversity of trigger structures and injection locations in FGL diminishes the effectiveness of traditional federated defense methods. Notably, existing defense strategies for FGL have yet to fully exploit the unique topological structure of graphs, highlighting opportunities for improvement in countering these attacks.
To this end, we propose a tailored topology- and distribution-aware backdoor defense method for federated graph learning (FedTD). At the client level, we introduce an energy function that integrates the underlying data distribution into the local model, assigning low energy to benign clients and high energy to malicious clients. By combining topological features with the energy function, we establish a more comprehensive energy estimation. At the server level, we construct a virtual graph based on each client's energy estimate to evaluate a maliciousness score for each client, and we take the homophily level of each local graph into account to ensure the reliability of the virtual graph. During aggregation, we weight each client's contribution inversely to its maliciousness score, yielding a more robust FGL system. FedTD remains robust under both small and large ratios of malicious clients. Extensive results across various federated graph scenarios under backdoor attacks validate the effectiveness of FedTD.
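For intuition, the following is a minimal sketch of the score-based reweighting step described above, assuming a hypothetical softmax-style down-weighting rule; the function and parameter names (aggregate, malicious_scores, temperature) are illustrative and not the paper's actual formulation.

```python
import numpy as np

def aggregate(client_updates, malicious_scores, temperature=1.0):
    """Weighted aggregation: clients with high maliciousness scores are
    down-weighted. This softmax-over-negated-scores rule is an assumed
    illustration, not the exact weighting defined by FedTD."""
    scores = np.asarray(malicious_scores, dtype=float)
    # Lower maliciousness score -> larger aggregation weight.
    logits = -scores / temperature
    weights = np.exp(logits - logits.max())  # numerically stable softmax
    weights /= weights.sum()
    # Convex combination of the client model updates.
    return sum(w * u for w, u in zip(weights, client_updates))

# Toy usage: three clients, the third flagged with a high maliciousness score.
updates = [np.ones(4), 2.0 * np.ones(4), 10.0 * np.ones(4)]
scores = [0.1, 0.2, 0.9]
print(aggregate(updates, scores))  # dominated by the two low-score clients
```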
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 5565