Theoretically Understanding Data Reconstruction Leakage in Federated Learning

Published: 31 Jan 2026, Last Modified: 31 Jan 2026. Accepted by TMLR. License: CC BY 4.0
Abstract: Federated learning (FL) is a collaborative learning paradigm that aims to protect data privacy. Unfortunately, recent works show that FL algorithms are vulnerable to data reconstruction attacks (DRAs), a serious form of privacy leakage. However, existing works lack a theoretical foundation for how much of the devices' data can be reconstructed, and the effectiveness of these attacks cannot be compared fairly because their performance is unstable. To address this deficiency, we propose a theoretical framework for understanding DRAs against FL. Our framework bounds the data reconstruction error, and an attack's error bound, expressed via a Lipschitz constant, reflects the attack's inherent effectiveness: a smaller Lipschitz constant indicates a stronger attack. Under this framework, we theoretically compare the effectiveness of existing attacks such as DLG and iDLG. We then empirically validate our results on multiple datasets, confirming that the iDLG attack inherently outperforms the DLG attack.
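The abstract's core idea, that a reconstruction map with a smaller Lipschitz constant yields a tighter error bound and hence a stronger attack, can be illustrated with a minimal numerical sketch. Everything below is a hypothetical toy (the linear "attacks" `A1`/`A2` and the sampling-based `lipschitz_estimate` helper are illustrative assumptions, not the paper's actual framework or attacks):

```python
import numpy as np

# Toy illustration: empirically lower-bound the Lipschitz constant of a
# reconstruction map f, which maps shared gradients to reconstructed data.
# A smaller constant means perturbations of the gradient move the
# reconstruction less, i.e. a tighter reconstruction-error bound.

rng = np.random.default_rng(0)

def lipschitz_estimate(f, dim, n_pairs=1000):
    """Lower-bound the Lipschitz constant of f by sampling input pairs."""
    best = 0.0
    for _ in range(n_pairs):
        g1, g2 = rng.normal(size=dim), rng.normal(size=dim)
        num = np.linalg.norm(f(g1) - f(g2))
        den = np.linalg.norm(g1 - g2)
        if den > 0:
            best = max(best, num / den)
    return best

# Two toy linear "attacks" (hypothetical, not DLG/iDLG): for a scaled
# identity map g -> c*g, the Lipschitz constant is exactly |c|.
A1 = 0.9 * np.eye(8)
A2 = 0.3 * np.eye(8)

L1 = lipschitz_estimate(lambda g: A1 @ g, dim=8)
L2 = lipschitz_estimate(lambda g: A2 @ g, dim=8)
print(L1, L2)  # L2 < L1: under the framework, attack 2 is "stronger"
```

Real attacks are nonlinear optimization procedures, so a sampled lower bound like this would only probe, not certify, their Lipschitz constants; the paper's contribution is to make such comparisons rigorous rather than empirical.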
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Jinghui_Chen1
Submission Number: 6374