QP-LDP for better global model performance in federated learning

12 May 2023, OpenReview Archive Direct Upload
Abstract: With the deployment of local differential privacy (LDP), federated learning (FL) gains stronger privacy-preserving capability against inference-type attacks. However, existing LDP methods degrade global model performance. In this paper, we propose QP-LDP, an algorithm for FL that yields a better-performing global model without weakening the privacy guarantees of the original LDP. Unlike previous LDP methods for FL, QP-LDP improves global model performance by precisely perturbing the non-common components of quantized local contributions. In addition, QP-LDP comprehensively protects two types of local contributions. A security analysis shows that QP-LDP provides probabilistic indistinguishability of clients' private local contributions at the component level. More importantly, carefully designed experiments show that, with QP-LDP deployed, the global model outperforms that of the original LDP-based FL in both prediction accuracy and convergence rate.
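To make the core idea concrete, the following is a minimal sketch of component-wise LDP over quantized values, the general mechanism the abstract builds on. It is not the authors' QP-LDP algorithm (the paper's selective treatment of non-common components is not reproduced here); the function names, the quantization grid, and the use of k-ary randomized response are all illustrative assumptions.

```python
import math
import random

def quantize(vec, levels=16, lo=-1.0, hi=1.0):
    """Map each real-valued component onto one of `levels` discrete bins
    spanning [lo, hi] (illustrative uniform quantizer)."""
    step = (hi - lo) / (levels - 1)
    return [min(levels - 1, max(0, round((v - lo) / step))) for v in vec]

def ldp_perturb(qvec, levels=16, epsilon=1.0):
    """k-ary randomized response applied per component: keep the true
    quantized level with probability p = e^eps / (e^eps + k - 1),
    otherwise output a uniformly random other level. Each component's
    report satisfies epsilon-LDP."""
    p = math.exp(epsilon) / (math.exp(epsilon) + levels - 1)
    out = []
    for q in qvec:
        if random.random() < p:
            out.append(q)  # report the true level
        else:
            # report a uniformly chosen different level
            out.append(random.choice([l for l in range(levels) if l != q]))
    return out
```

Under this kind of mechanism, a larger epsilon raises the keep-probability p, trading privacy for fidelity of the reported components; QP-LDP's stated improvement comes from spending that perturbation budget only where it matters, rather than uniformly across all components.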