Towards Accurate and Stronger Local Differential Privacy for Federated Learning with Staircase Randomized Response
Abstract: Federated Learning (FL), a privacy-preserving training approach, has proven to be effective, yet its vulnerability to attacks that extract information from model weights is widely recognized. To address such privacy concerns, Local Differential Privacy (LDP) has been applied to FL: each client perturbs the weights of its locally trained model before sharing them. However, besides the high utility loss caused by randomizing model weights, we identify a new inference attack on existing LDP methods that can reconstruct the original value from the noisy values with high confidence. To mitigate these issues, in this paper, we propose the Staircase Randomized Response (SRR)-FL framework, which assigns higher perturbation probabilities to weights closer to the true weight, reducing the distance between the true and perturbed data. This minimizes the noise required to maintain the same LDP guarantee, leading to better utility. Compared to existing LDP mechanisms in FL (e.g., Generalized Randomized Response), SRR-FL trains a more accurate privacy-preserving model and is more robust against the inference attack while ensuring the same LDP guarantee. Furthermore, we apply parameter shuffling for privacy amplification. The efficacy of SRR-FL has been validated on the widely used datasets MNIST, Medical-MNIST, and CIFAR-10, demonstrating remarkable performance. Code is available at https://github.com/matta-varun/SRR-FL.
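The core idea of a staircase mechanism, as described in the abstract, can be illustrated with a minimal sketch. This is not the paper's implementation (see the linked repository for that); it is a simplified, hypothetical variant over a discretized weight domain of `k` bins, using a circular distance so the normalizer is identical for every input. Values are grouped into "stairs" by distance from the true bin, closer stairs get higher probability, and all probabilities stay within a factor of e^ε of each other, which is sufficient for ε-LDP.

```python
import math
import random

def staircase_probs(true_idx, k, epsilon, num_stairs=4):
    """Perturbation distribution for a simplified staircase randomized
    response over a domain of k bins (illustrative sketch, not the
    authors' exact mechanism). Requires num_stairs >= 2.

    Bins are grouped into `num_stairs` stairs by circular distance from
    `true_idx`; stair s gets unnormalized weight exp(-epsilon * s/(m-1)),
    so weights lie in [e^-eps, 1]. Circular distance makes the weight
    multiset (hence the normalizer) input-independent, so for any two
    inputs the output-probability ratio is at most e^eps -> eps-LDP.
    """
    m = num_stairs
    width = max(1, math.ceil((k // 2 + 1) / m))  # bins per stair
    weights = []
    for i in range(k):
        d = min(abs(i - true_idx), k - abs(i - true_idx))  # circular distance
        stair = min(d // width, m - 1)
        weights.append(math.exp(-epsilon * stair / (m - 1)))
    total = sum(weights)
    return [w / total for w in weights]

def staircase_rr(true_idx, k, epsilon, num_stairs=4):
    """Sample a perturbed bin index from the staircase distribution."""
    probs = staircase_probs(true_idx, k, epsilon, num_stairs)
    return random.choices(range(k), weights=probs)[0]
```

With two stairs this degenerates to a Generalized Randomized Response-like mechanism (one "high" and one "low" probability); more stairs concentrate mass near the true value, which is the utility gain the abstract attributes to SRR.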