Abstract: Federated Learning (FL) enables geo-distributed clients to collaboratively train a learning model without exposing their private data. By sharing only local model parameters, FL largely preserves client data privacy. However, raw samples can still be recovered from frequently exposed parameters, resulting in privacy leakage. Differentially private federated learning (DPFL) has recently been proposed to protect these parameters by injecting noise, so that even if attackers obtain the parameters, they cannot exactly infer the true values from the noisy information. Directly incorporating Differential Privacy (DP) into FL, however, can severely degrade model utility. In this article, we present an optimized sparse response mechanism (OSRM) that seamlessly incorporates DP into FL to reduce privacy budget consumption and improve model accuracy. With OSRM, each FL client exposes only a selected set of large gradients, so that the privacy budget is not wasted on protecting valueless gradients. We theoretically derive the convergence rate of DPFL with OSRM under non-convex loss, and then optimize OSRM by minimizing the loss term in the convergence rate. Based on this analysis, we present an effective algorithm for optimizing OSRM. Extensive experiments are conducted on public datasets, including MNIST, Fashion-MNIST, and CIFAR-10. The results show that OSRM achieves an average accuracy improvement of 18.42% over state-of-the-art baselines under a fixed privacy budget.
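The core idea described in the abstract — a client exposing only its largest gradients and adding DP noise to just those coordinates — can be illustrated with a minimal sketch. This is a hypothetical helper, not the authors' actual OSRM algorithm; the function name, clipping strategy, and Gaussian-mechanism noise scale are all illustrative assumptions.

```python
import numpy as np

def sparse_dp_response(grad, k, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Illustrative sketch (hypothetical helper, not the paper's
    implementation): keep only the k largest-magnitude gradient
    entries, clip the sparse vector, and add Gaussian DP noise to
    the exposed coordinates only."""
    rng = np.random.default_rng() if rng is None else rng
    flat = grad.ravel().astype(float).copy()
    # Select indices of the k largest-magnitude entries.
    topk_idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[topk_idx] = flat[topk_idx]
    # Clip the L2 norm to bound sensitivity for the Gaussian mechanism.
    norm = np.linalg.norm(sparse)
    if norm > clip_norm:
        sparse *= clip_norm / norm
    # Noise is added only to exposed coordinates, so no budget is
    # spent protecting the discarded (small) gradients.
    sparse[topk_idx] += rng.normal(0.0, noise_mult * clip_norm, size=k)
    return sparse.reshape(grad.shape)
```

In a DPFL round, a client would call this on its local gradient before upload; the server aggregates the sparse noisy responses as usual. The paper's contribution is in choosing the selection set optimally from the convergence analysis, which this sketch does not attempt.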
External IDs: dblp:journals/tdsc/MaZCG24