Abstract: Federated learning (FL) enhanced by local differential privacy (LDP) offers promising privacy-preserving capabilities against attacks on local contributions. In this context, noise-discounting LDP methods have been widely investigated to provide better model performance and stronger privacy guarantees. However, prior art calibrates privacy guarantees under distinct LDP definitions, resulting in nonuniform privacy-preserving capabilities. In this article, aligned with the standard LDP definition, we propose QP-LDP, a noise-discounting algorithm for FL that yields better model performance without any additional privacy loss. Specifically, QP-LDP precisely perturbs the noncommon components of quantized local contributions, which are selected by an extended multiparty private set intersection process. In particular, QP-LDP comprehensively protects two types of local contributions, i.e., local models and gradients, for the prevailing FedAvg and FedSGD schemes, respectively. Through theoretical analysis, QP-LDP provides component-level indistinguishability for clients' private local contributions and rigorous convergence guarantees for the global model. Extensive experiments on four widely used datasets show that, compared to the standard LDP method, the global model's prediction accuracy and convergence rate achieved by QP-LDP improve by up to 14.99% and 23.08%, respectively. More importantly, QP-LDP achieves the same level of privacy-preserving capability against privacy attacks as the standard LDP method.