FedEM: A Privacy-Preserving Framework for Concurrent Utility Preservation in Federated Learning

18 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Federated Learning, Gradient Leakage Attacks, Privacy-Utility Trade-off
Abstract: Federated Learning (FL) enables collaborative model training across distributed clients without sharing local data, thus reducing privacy risks in decentralized systems. However, the gradients exposed during training can leak significant private information, particularly under gradient inversion attacks. To address this issue, we propose Federated Error Minimization (FedEM), an input-level defense framework that injects learnable perturbations into client data and jointly optimizes the model and the perturbation generator. Unlike traditional Differential Privacy methods that modify gradients, FedEM perturbs the inputs directly and thereby achieves a more favorable privacy-utility trade-off. We validate the effectiveness of FedEM through extensive experiments on benchmark datasets. For example, on MNIST, FedEM incurs only a 0.08% decrease in accuracy compared to FedSGD while significantly improving privacy metrics, increasing the attacker's reconstruction MSE by 46.2% and reducing SSIM by 69.3%. These results demonstrate that FedEM effectively mitigates gradient leakage attacks with minimal utility loss, providing a robust and scalable solution for privacy-preserving federated learning.
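The abstract describes FedEM's core mechanism only at a high level: clients perturb their inputs with a learnable generator and train the model and generator jointly, so that only gradients computed on perturbed data ever leave the client. Below is a minimal PyTorch-style sketch of what one such client update could look like. The generator architecture `PerturbGen`, the perturbation bound `eps`, and the single joint loss are illustrative assumptions, not the paper's actual specification.

```python
import torch
import torch.nn as nn

class PerturbGen(nn.Module):
    """Learnable input-level perturbation generator (assumed architecture)."""
    def __init__(self, in_ch=1, eps=8 / 255):
        super().__init__()
        self.eps = eps  # assumed bound on perturbation magnitude
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, in_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Bounded additive perturbation applied directly to the inputs,
        # so gradients are only ever computed on perturbed data.
        return x + self.eps * self.net(x)

def client_update(model, gen, loader, lr=0.01):
    """One local round: jointly optimize model and perturbation generator."""
    opt = torch.optim.SGD(
        list(model.parameters()) + list(gen.parameters()), lr=lr
    )
    ce = nn.CrossEntropyLoss()
    for x, y in loader:
        opt.zero_grad()
        loss = ce(model(gen(x)), y)  # train on perturbed inputs only
        loss.backward()
        opt.step()
    # As in standard FL, only model parameters leave the client;
    # the generator and the raw data stay local.
    return model.state_dict()
```

The design intent this sketch illustrates is that a gradient inversion attack on the shared updates can at best recover the perturbed inputs `gen(x)`, not the raw client data, while joint training keeps the task loss low.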
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 11239