Keywords: Gradient inversion attacks, Federated learning, AI security
Abstract: In the federated learning (FL) framework, clients participate in collaborative learning tasks under the coordination of a central server. Clients train local models on their own data and share only gradients with the server, which aggregates them; keeping raw data local is intended to protect privacy. However, recent research has revealed that gradient inversion attacks (GIAs) can recover private data from the shared gradients.
Prior work has demonstrated the feasibility of recovering input data from gradients only under highly restrictive conditions: on high-resolution face datasets, GIAs often struggle to even initiate an effective attack, and on object datasets such as ImageNet they remain limited to small batch sizes and incur high time costs.
As a result, we argue that mounting GIAs on high-resolution face datasets with large batch sizes remains a challenging task. In this work, we introduce \textbf{F}ast \textbf{G}radient \textbf{L}eakage (FGL), which enables rapid image recovery across various network models on complex datasets, including the CelebA face dataset (1000 classes, 224$\times$224 px).
We also introduce StyleGAN as an image prior and achieve FGL with a batch size of 60 in our experiments (limited by the available hardware).
We further propose a joint gradient matching loss, in which multiple distinct matching losses jointly clarify the attack direction and improve the efficiency of the optimization.
Extensive experimentation validates the feasibility of our approach. We anticipate that our proposed method can serve as a valuable tool to advance the development of privacy defense techniques.
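To make the abstract's idea of gradient matching with a generator prior concrete, here is a minimal PyTorch-style sketch of one inversion step. It is not the paper's implementation: the generator G (standing in for a pretrained StyleGAN), the two-term loss (L2 plus cosine distance) used to illustrate a "joint" matching objective, and the weights alpha and beta are all illustrative assumptions.

```python
# Minimal sketch of a gradient-matching inversion step with a generator prior.
# G, alpha, beta, and the optimizer setup are illustrative assumptions, not the
# paper's actual FGL implementation.
import torch
import torch.nn.functional as F

def joint_gradient_matching_loss(dummy_grads, true_grads, alpha=1.0, beta=0.1):
    """Combine an L2 term and a cosine-distance term over all gradient tensors."""
    l2 = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    cos = sum(1 - F.cosine_similarity(dg.flatten(), tg.flatten(), dim=0)
              for dg, tg in zip(dummy_grads, true_grads))
    return alpha * l2 + beta * cos

def invert_step(model, G, z, labels, true_grads, criterion, opt):
    """One optimization step: update latent z so that G(z) reproduces the shared gradients."""
    opt.zero_grad()
    dummy_images = G(z)  # image prior: optimize in the generator's latent space
    dummy_loss = criterion(model(dummy_images), labels)
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    match_loss = joint_gradient_matching_loss(dummy_grads, true_grads)
    match_loss.backward()
    opt.step()
    return match_loss.item()
```

Optimizing a latent code z through a generator, rather than raw pixels, is what lets this style of attack scale to high-resolution images; the joint loss is meant to give the optimizer a clearer descent direction than either term alone.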
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Supplementary Material: pdf
Submission Number: 3075