Theoretically Understanding Data Reconstruction Leakage in Federated Learning

23 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: societal considerations including fairness, safety, privacy
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Privacy leakage, data reconstruction attacks, federated learning
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Federated learning (FL) is an emerging collaborative learning paradigm that aims to protect data privacy. Unfortunately, recent works show that FL algorithms are vulnerable to data reconstruction attacks, and a series of follow-up works have been proposed to enhance attack effectiveness. However, existing works lack a theoretical understanding of the extent to which devices' data can be reconstructed, and the effectiveness of these attacks cannot be compared theoretically. To address this gap, we propose a theoretical framework for understanding data reconstruction attacks on FL. Our framework bounds the data reconstruction error, and an attack's error bound reflects its inherent effectiveness. Under this framework, we can theoretically compare the effectiveness of existing attacks. For instance, our experimental results on multiple datasets validate that the iDLG data reconstruction attack inherently outperforms the DLG attack.
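For readers unfamiliar with the attacks the abstract compares, the sketch below shows the gradient-matching objective that DLG (Zhu et al., 2019) optimizes: given the gradients a device shares, the attacker optimizes a dummy input and a soft dummy label until its own gradients match them. This is a minimal illustration, not code from the paper; the function name, optimizer settings, and tensor shapes are our own assumptions.

```python
import torch
import torch.nn.functional as F

def dlg_reconstruct(model, observed_grads, input_shape, num_classes,
                    steps=300, lr=1.0):
    """Gradient-matching reconstruction in the spirit of DLG.

    observed_grads: per-parameter gradients shared by a device for one
    (input, label) pair. The attacker optimizes a dummy input and a soft
    dummy label so that its own gradients match the observed ones.
    """
    dummy_x = torch.randn(input_shape, requires_grad=True)   # e.g. (1, 3, 32, 32)
    dummy_y = torch.randn(1, num_classes, requires_grad=True)
    optimizer = torch.optim.LBFGS([dummy_x, dummy_y], lr=lr)

    def closure():
        optimizer.zero_grad()
        pred = model(dummy_x)
        # Cross-entropy of the prediction against the softmaxed dummy label.
        loss = -(F.softmax(dummy_y, dim=-1) * F.log_softmax(pred, dim=-1)).sum()
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Squared distance between the attacker's gradients and the observed ones.
        grad_diff = sum(((g - og) ** 2).sum()
                        for g, og in zip(grads, observed_grads))
        grad_diff.backward()
        return grad_diff

    for _ in range(steps):
        optimizer.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```

iDLG differs in that it first recovers the ground-truth label analytically from the sign of the classification layer's gradient and then optimizes only the dummy input; the framework's error bounds are what allow this advantage to be compared theoretically rather than only empirically.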
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6599