Privacy-Preserving Robust Federated Learning with Distributed Differential Privacy

Published: 01 Jan 2022, Last Modified: 15 May 2023 · TrustCom 2022
Abstract: Federated Learning (FL) has attracted significant interest, as it provides a distributed machine learning paradigm for sharing data resources during the model training process. However, the gradients or model weights uploaded by clients, as well as the final model aggregated by the server, can lead to privacy disclosure and correctness issues. Specifically, the original data can be inferred by analyzing the shared gradients, and malicious users can disrupt model aggregation and destroy the model's accuracy. To address these issues, we propose a novel FL scheme that provides both privacy protection and robust aggregation. By using distributed differential privacy and range proofs, the proposed scheme resists semi-honest servers and malicious users while protecting the global model and maintaining high accuracy. Both a privacy analysis and experiments demonstrate the effectiveness of our scheme.
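
Though the abstract does not spell out the construction, the core idea of distributed differential privacy can be sketched roughly as follows: each client perturbs its clipped update with a small share of the total noise, so that the aggregate satisfies the privacy guarantee even though no single party adds (or sees) the full noise, and the server additionally validates that submitted updates lie in an expected range before aggregating. The Python sketch below is an illustration under these assumptions, not the paper's actual mechanism; in particular, the plaintext norm check merely stands in for the paper's cryptographic range proofs, and all parameters (clip bound C, noise scale sigma) are hypothetical.

```python
# Illustrative sketch only: per-client Gaussian noise shares plus a
# plaintext range check standing in for a zero-knowledge range proof.
# Parameters (C, sigma, n_clients) are assumptions, not from the paper.
import numpy as np

def clip_update(update: np.ndarray, clip_norm: float) -> np.ndarray:
    """Bound each client's influence by clipping to an L2 norm of clip_norm."""
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return update * scale

def client_noisy_update(update: np.ndarray, clip_norm: float,
                        sigma_total: float, n_clients: int,
                        rng: np.random.Generator) -> np.ndarray:
    """Distributed DP: each client adds a 1/n share of the noise variance,
    so the summed update carries Gaussian noise with std sigma_total even
    though no individual client adds the full amount."""
    clipped = clip_update(update, clip_norm)
    sigma_local = sigma_total / np.sqrt(n_clients)
    return clipped + rng.normal(0.0, sigma_local, size=update.shape)

def aggregate(noisy_updates, clip_norm: float, sigma_total: float) -> np.ndarray:
    """Server-side robust aggregation: reject out-of-range updates
    (a plaintext stand-in for verifying a range proof), then average."""
    bound = clip_norm + 4.0 * sigma_total  # generous slack for the noise
    accepted = [u for u in noisy_updates if np.linalg.norm(u) <= bound]
    return np.mean(accepted, axis=0)

rng = np.random.default_rng(0)
n_clients, dim, C, sigma = 10, 5, 1.0, 0.5
updates = [rng.normal(size=dim) for _ in range(n_clients)]
honest = [client_noisy_update(u, C, sigma, n_clients, rng) for u in updates]
malicious = rng.normal(size=dim) * 100.0  # a cheating client skips clipping
print(aggregate(honest + [malicious], C, sigma))  # oversized update is filtered out
```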