Fast Generation-Based Gradient Leakage Attacks: An Approach to Generate Training Data Directly From the Gradient
Abstract: Federated learning (FL) is a distributed machine learning technique designed to preserve the privacy of user data. However, FL has been shown to be vulnerable to gradient leakage attacks (GLA), which can reconstruct private training data from shared gradients with high probability. Existing attacks are either analytic-based, requiring modification of the FL model, or optimization-based, suffering from long convergence times and failing to handle the highly compressed gradients used in practical FL systems. This paper presents FGLA, a pioneering generation-based GLA method that reconstructs batches of user data without an optimization process. We design a feature separation technique that first extracts the feature of each sample in a batch from the gradient and then generates the user data directly. Extensive experiments on multiple image datasets show that FGLA reconstructs user images in seconds at a batch size of 256 from highly compressed gradients (compression ratios of 0.8% and above), significantly outperforming state-of-the-art methods.
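To make the two-step idea concrete, the sketch below illustrates one plausible form of feature separation followed by direct generation. It assumes the victim model ends with global average pooling and a single fully connected (FC) layer, and that each label appears at most once in the batch, so each row of the FC weight gradient is dominated by one sample's feature. The `Generator` class, `separate_features` helper, and all dimensions are hypothetical stand-ins, not the architecture from the paper.

```python
# Minimal sketch of a generation-based gradient leakage attack in the spirit
# of FGLA. Assumptions (not from the paper): the victim model ends with global
# average pooling + one FC layer, batch labels are unique and known, and a
# feature-to-image generator has been pre-trained offline.

import torch
import torch.nn as nn


class Generator(nn.Module):
    """Hypothetical feature-to-image decoder (placeholder architecture)."""

    def __init__(self, feat_dim: int = 512, img_channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 4 * 4 * 256),
            nn.Unflatten(1, (256, 4, 4)),
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1),  # 4x4 -> 8x8
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),   # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, img_channels, 4, stride=2, padding=1),  # -> 32x32
            nn.Tanh(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)


def separate_features(fc_weight_grad: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Extract one feature vector per sample from the shared FC weight gradient.

    With cross-entropy loss and unique labels, row y_i of dL/dW is dominated by
    sample i's pooled feature (scaled by its output-layer error), so the row
    direction approximates the feature direction.
    """
    feats = fc_weight_grad[labels]                       # [B, feat_dim]
    feats = feats / (feats.norm(dim=1, keepdim=True) + 1e-8)
    return feats


# Usage: reconstruct a batch directly from an observed gradient, no optimization.
fc_weight_grad = torch.randn(1000, 512)   # stand-in for the observed dL/dW
labels = torch.tensor([3, 17, 42, 256])   # inferred batch labels (assumed known)
generator = Generator(feat_dim=512)       # would be pre-trained offline
with torch.no_grad():
    images = generator(separate_features(fc_weight_grad, labels))
print(images.shape)  # torch.Size([4, 3, 32, 32])
```

Because the generator is trained once offline and reconstruction is a single forward pass, this design explains the seconds-scale runtime the abstract claims, in contrast to optimization-based attacks that iterate per batch.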