Keywords: Gradient Inversion, Federated Learning
Abstract: The main premise of federated learning is that local clients can upload gradients instead of data during collaborative learning, thereby preserving data privacy. However, the development of gradient inversion methods poses a severe challenge to this premise: a third party can still reconstruct the original training images from the uploaded gradients. While previous works mostly consider relatively low-resolution images and small batch sizes, in this paper we show that image reconstruction from complex datasets like ImageNet is still possible, even with large batch sizes and high resolutions. The success of the proposed method rests on three key factors: a convolutional network that implicitly provides an image prior, an over-parameterized network that guarantees a solution to the joint image-generation and gradient-matching problem exists, and a properly designed architecture that creates pixel intimacy. We conduct a series of practical experiments demonstrating that the proposed algorithm outperforms state-of-the-art (SOTA) algorithms and reconstructs the underlying original training images more effectively. Source code is available at: (to be released upon publication).
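To make the premise of the abstract concrete, the following is a minimal, self-contained sketch of optimization-based gradient inversion ("gradient matching") on a single linear neuron. All values (the weights `w`, the secret input `x_true`, the label `y_true`) are hypothetical, and the label is assumed known to the attacker; real attacks operate on deep networks with image-sized inputs, which is what makes the priors discussed in the paper necessary.

```python
import numpy as np

# Hypothetical toy setup: a shared linear model with squared loss.
w = np.array([1.0, -0.5])        # shared model weights
x_true = np.array([0.3, 0.7])    # the client's secret input
y_true = 1.0                     # its label (assumed known here)

def model_grad(x):
    # Gradient of the loss 0.5 * (w @ x - y)^2 with respect to w.
    return (w @ x - y_true) * x

g_shared = model_grad(x_true)    # what the client would upload

def matching_loss(x):
    d = model_grad(x) - g_shared
    return d @ d

# Attacker: start from a dummy input and run gradient descent on the
# distance between its induced gradient and the uploaded one.
x_hat = np.zeros(2)
for _ in range(2000):
    residual = w @ x_hat - y_true
    # Jacobian of model_grad(x) with respect to x: x w^T + residual * I.
    jac = np.outer(x_hat, w) + residual * np.eye(2)
    x_hat -= 0.1 * 2.0 * (jac.T @ (model_grad(x_hat) - g_shared))

print(np.round(x_hat, 3))  # converges toward x_true
```

In this linear case the matching objective has a near-exact minimizer at the secret input, so plain gradient descent recovers it; for deep networks the objective is highly non-convex, which is where the convolutional prior and over-parameterization described in the abstract come in.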
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: General Machine Learning (ie none of the above)
Supplementary Material: zip