Federated Learning Vulnerabilities: Privacy Attacks with Denoising Diffusion Probabilistic Models

Published: 01 Jan 2024 · Last Modified: 29 Sept 2024 · WWW 2024 · CC BY-SA 4.0
Abstract: Federated Learning (FL) is widely valued for protecting data privacy in distributed environments. However, the correlation between shared gradient updates and the training data opens the door to data reconstruction by malicious attackers, threatening the basic privacy guarantees of FL. Previous research on such attacks falls into two main camps: one relies exclusively on gradient matching, which performs well on small-scale data but falters on large-scale data; the other incorporates image priors but faces practical implementation challenges. So far, the effectiveness of privacy leakage attacks in FL remains far from satisfactory. In this paper, we introduce the Gradient Guided Diffusion Model (GGDM), a novel learning-free approach built on a pre-trained unconditional Denoising Diffusion Probabilistic Model (DDPM), aimed at improving the effectiveness, and reducing the implementation difficulty, of gradient-based privacy attacks on complex networks and high-resolution images. To the best of our knowledge, this is the first work to employ a DDPM for privacy leakage attacks in FL. GGDM capitalizes on the unique nature of gradients and guides the DDPM so that reconstructed images closely mirror the original data. In addition, based on theoretical analysis, GGDM combines a gradient similarity function with the Stochastic Differential Equation (SDE) formulation to guide the DDPM sampling process, and further reveals the impact of common similarity functions on data reconstruction. Extensive evaluation demonstrates the strong generalization ability of GGDM: compared with state-of-the-art methods, it shows clear superiority in both quantitative metrics and visualization, significantly improving the reconstruction quality of privacy attacks.
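The abstract only sketches the mechanism, but the core idea (steering each reverse-diffusion step with a gradient-similarity signal, in the spirit of classifier guidance) can be illustrated in code. Below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the tiny victim model, the placeholder noise predictor eps_theta, the linear beta schedule, the cosine similarity choice, and the guidance scale are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

# --- Hypothetical setup (not from the paper) ------------------------------
# A tiny "victim" classifier whose leaked training gradient the attacker saw.
victim = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))

def victim_gradient(images, labels):
    """Gradient of the victim's loss w.r.t. its parameters, flattened."""
    loss = F.cross_entropy(victim(images), labels)
    grads = torch.autograd.grad(loss, tuple(victim.parameters()), create_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def eps_theta(x_t, t):
    """Stand-in for a pre-trained unconditional DDPM noise predictor."""
    return torch.zeros_like(x_t)  # placeholder; a real DDPM predicts noise here

# Linear beta schedule, shortened for the sketch (a common DDPM choice).
T = 50
betas = torch.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def guided_reconstruction(leaked_grad, labels, shape=(1, 3, 32, 32), scale=1.0):
    """Reverse DDPM sampling nudged toward gradient agreement at every step."""
    x_t = torch.randn(shape)
    for t in reversed(range(T)):
        x_t = x_t.detach().requires_grad_(True)
        # Standard DDPM posterior mean computed from the noise prediction.
        eps = eps_theta(x_t, t)
        mean = (x_t - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        # Gradient-similarity guidance: shift the mean toward images whose
        # victim gradient matches the leaked one (cosine similarity here;
        # the paper analyzes several similarity functions).
        sim = F.cosine_similarity(victim_gradient(x_t, labels), leaked_grad, dim=0)
        guidance = torch.autograd.grad(sim, x_t)[0]
        mean = mean + scale * guidance
        noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
        x_t = mean + betas[t].sqrt() * noise
    return x_t.detach()

# Usage: reconstruct from a gradient leaked for one CIFAR-sized image.
target = torch.rand(1, 3, 32, 32)
labels = torch.tensor([3])
leaked = victim_gradient(target, labels).detach()
recon = guided_reconstruction(leaked, labels)
```

In the paper, the guidance term is derived from an SDE view of the DDPM sampling process rather than the ad hoc mean shift used above, but the structure is the same: the diffusion prior supplies realism, while the similarity term anchors the sample to the observed gradient.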