RGLA: Reverse Gradient Leakage Attack using Inverted Cross-Entropy Loss Function

22 Sept 2023 (modified: 11 Feb 2024), Submitted to ICLR 2024
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Federated learning, Data reconstruction attack, Gradient leakage attack, Inverted Cross-Entropy loss function
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
TL;DR: We introduce RGLA, a novel gradient leakage attack that effectively tackles the challenges of high-resolution data and duplicate labels in gradient leakage attacks, improving data reconstruction performance and overall applicability.
Abstract: Federated learning (FL) has gained widespread adoption due to its ability to jointly train models by only uploading gradients while retaining data locally. Recent research has revealed that gradients can expose the private training data of the client. However, these recent attacks were either powerless against gradients computed on high-resolution data with large batch sizes or relied on the strict assumption that the adversary can control and ensure unique labels for every sample in the attacked batch. These unrealistic settings and assumptions create the illusion that data privacy is still protected in real-world FL training mechanisms. In this paper, we propose a novel gradient leakage attack named RGLA, which effectively recovers high-resolution data of large batch size from gradients while accommodating duplicate labels, making it applicable in realistic FL scenarios. The key to RGLA is to invert the cross-entropy loss function to obtain the model outputs corresponding to the private model inputs. Next, RGLA directly computes the feature map fed into the last fully-connected layer by leveraging the obtained model outputs. To the best of our knowledge, this is the first successful disaggregation of the feature map in a generic FL setting. Finally, a generative feature inversion model from prior work is used to invert each sample's feature map back into the model input space. Extensive experimental results demonstrate that RGLA can reconstruct 224$\times$224 pixel images with a batch size of 256 while handling duplicate labels. Our source code is available at https://github.com/AnonymousGitHub001/RGLA.
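To make the abstract's core observation concrete, below is a minimal, self-contained sketch (not the authors' implementation) of the well-known single-sample case that RGLA generalizes to batches with duplicate labels: for a cross-entropy loss, the gradient of the last fully-connected layer's weights is the outer product of the softmax residual and the input feature map, so the feature can be recovered by rescaling a gradient row. All dimensions and variable names here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical dimensions for illustration only.
feature_dim, num_classes = 512, 10

# A single sample's feature map h entering the last fully-connected layer.
h = torch.randn(feature_dim)
label = torch.tensor(3)

# Last fully-connected layer (bias omitted for simplicity).
W = torch.randn(num_classes, feature_dim, requires_grad=True)

# Forward pass and cross-entropy loss, as in standard FL local training.
logits = W @ h                                            # shape: (num_classes,)
loss = F.cross_entropy(logits.unsqueeze(0), label.unsqueeze(0))
loss.backward()                                           # W.grad is what the client would upload

# For cross-entropy, dL/dW = (softmax(logits) - onehot(label)) ⊗ h,
# so every row of the weight gradient is a scaled copy of h.
p = F.softmax(logits.detach(), dim=0)
residual = p.clone()
residual[label] -= 1.0                                    # softmax output minus one-hot label

row = label.item()                                        # ground-truth row (its scale is negative)
h_recovered = W.grad[row] / residual[row]

print(torch.allclose(h_recovered, h, atol=1e-5))          # True: feature map recovered exactly
```

With a batch of samples the uploaded gradient sums these per-sample outer products, which is why the abstract emphasizes disaggregating the feature map from the aggregated gradient and recovering the per-sample model outputs by inverting the cross-entropy loss; the sketch above only shows the degenerate batch-size-1 case.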
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
Supplementary Material: zip
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5079