Optimal Defenses Against Data Reconstruction Attacks

Published: 10 Jun 2025, Last Modified: 13 Jul 2025
Venue: DIG-BUG Long
License: CC BY 4.0
Keywords: Federated learning, Data reconstruction, Bayesian C-R lower bound
TL;DR: We derive a lower bound on reconstruction error in federated learning and use it to optimize the gradient pruning and gradient noise defenses.
Abstract: Federated Learning (FL) is designed to prevent data leakage through collaborative model training without centralized data storage. However, it is vulnerable to reconstruction attacks that recover original training data from shared gradients. To optimize the trade-off between data leakage and utility loss, we first derive a theoretical lower bound on the reconstruction error (over all attackers) for the two standard defenses: adding noise and gradient pruning. We then customize these two defenses to be parameter- and model-specific, achieving the optimal trade-off between the obtained reconstruction lower bound and model utility. Experimental results validate that our methods outperform Gradient Noise and Gradient Pruning, protecting the training data better while also achieving higher utility.
Submission Number: 28
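For context, the two standard defenses the abstract refers to can be sketched as follows. This is a minimal PyTorch sketch of uniform (non-optimized) gradient noise and magnitude-based gradient pruning applied to shared gradients; the hyperparameters `sigma` and `prune_ratio` are hypothetical, and the paper's parameter- and model-specific optimization of these defenses is not shown here.

```python
import torch

def add_gradient_noise(grads, sigma=0.01):
    """Baseline Gradient Noise defense: perturb each shared gradient tensor
    with i.i.d. Gaussian noise of standard deviation sigma (hypothetical value)."""
    return [g + sigma * torch.randn_like(g) for g in grads]

def prune_gradients(grads, prune_ratio=0.9):
    """Baseline Gradient Pruning defense: zero out the smallest-magnitude
    entries of each gradient tensor, keeping the top (1 - prune_ratio) fraction."""
    pruned = []
    for g in grads:
        flat = g.abs().flatten()
        k = max(1, int((1.0 - prune_ratio) * flat.numel()))
        threshold = torch.topk(flat, k).values.min()
        pruned.append(torch.where(g.abs() >= threshold, g, torch.zeros_like(g)))
    return pruned
```

In both baselines the same noise scale or pruning ratio is applied to every parameter; the proposed methods instead choose these quantities per parameter and per model to maximize the derived reconstruction lower bound at a given utility cost.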