Keywords: Privacy-Preserving, Federated Learning, Homomorphic Encryption, Secure Clustering, Model Poisoning Attack
TL;DR: This paper proposes SCFL, a privacy-preserving federated learning model that uses secure clustering and cosine similarity to defend against model poisoning attacks while improving efficiency and robustness for both IID and non-IID data.
Abstract: Federated learning (FL) has been developed as a distributed machine learning paradigm that utilizes data distributed across various terminals. In FL, because each client's gradient is shared, privacy leakage problems arise, which has led to the development of privacy-preserving federated learning (PPFL) as an emerging secure FL approach. Nevertheless, current PPFL methods, particularly on non-IID data, suffer from heavy computation and communication overhead, and remain susceptible to model poisoning attacks. To address these problems, we design a cosine similarity-based defense strategy for DBSCAN clustering and a secure federated learning model based on secure clustering (SCFL), which resists encrypted model poisoning attacks while protecting privacy. We first construct a KD-tree based on the cosine similarity between local gradients. KD-tree-accelerated DBSCAN clustering is then applied to detect malicious gradients and to handle data heterogeneity. We further propose a Byzantine-tolerant aggregation rule based on cosine similarity and show that it is robust in both IID and non-IID data settings. Experimental results show that SCFL outperforms prevailing defense strategies such as ShieldFL: it reduces communication cost by about 50%, cuts encryption and decryption runtime by about 80%, and improves accuracy by 5%-10%.
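The gradient-clustering defense sketched in the abstract can be illustrated with a minimal pure-Python example. This is not the paper's implementation: the KD-tree acceleration and the homomorphic encryption layer are omitted, the `eps`/`min_pts` values and the toy gradients are illustrative assumptions, and DBSCAN is written out directly so the cosine-distance neighborhood test is visible.

```python
from math import sqrt

def cosine_distance(a, b):
    """Cosine distance 1 - cos(a, b); small for gradients pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def dbscan(points, eps, min_pts):
    """Plain DBSCAN over cosine distance; label -1 marks noise (suspected malicious)."""
    n = len(points)
    labels = [None] * n
    cluster = 0

    def neighbors(i):
        return [j for j in range(n) if cosine_distance(points[i], points[j]) <= eps]

    for i in range(n):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # not a core point: flag as noise
            continue
        labels[i] = cluster
        seeds = list(nbrs)
        k = 0
        while k < len(seeds):       # expand the cluster from core points
            j = seeds[k]
            k += 1
            if labels[j] == -1:
                labels[j] = cluster  # noise reached from a core point becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:
                seeds.extend(jn)
        cluster += 1
    return labels

# Toy local gradients: five benign clients point in a similar direction,
# one poisoned client pushes the model the opposite way.
grads = [
    [1.0, 0.9, 1.1],
    [0.9, 1.0, 1.0],
    [1.1, 1.0, 0.9],
    [1.0, 1.1, 1.0],
    [0.95, 1.0, 1.05],
    [-1.0, -1.0, -1.0],  # poisoned gradient
]
labels = dbscan(grads, eps=0.05, min_pts=3)

# Aggregate only gradients that landed inside a dense cluster.
benign = [g for g, lab in zip(grads, labels) if lab != -1]
global_grad = [sum(col) / len(benign) for col in zip(*benign)]
print(labels)  # the poisoned gradient is labeled -1 and excluded
```

The poisoned gradient's cosine distance to every benign gradient exceeds `eps`, so its neighborhood is too small to make it a core point and it is dropped before averaging; the benign gradients, being nearly collinear, form one cluster.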
Submission Number: 53