Understanding Robustness Against Gradient Inversion Attacks: A Flat Minima Perspective

16 Sept 2025 (modified: 18 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Federated Learning, Gradient Inversion Attack, Privacy, Robustness
Abstract: Gradient Inversion Attacks (GIAs), which aim to reconstruct input data from its gradients, pose substantial risks of data leakage and challenges to data privacy in distributed learning systems such as federated learning (FL). Nevertheless, existing defenses against GIAs are mostly ad hoc, relying on gradient modifications without a principled understanding of when gradients are vulnerable to GIAs and how the possibility of data leakage can be fundamentally suppressed. We interpret GIAs through the mutual information between the gradients $G$ and their data $X$, i.e., $I(X;G)$, which we show is upper-bounded by the Hessian of the loss. Based on this finding, we revisit robustness against GIAs for flat-minima-searching FL algorithms, which inherently suppress Hessian values and thus minimize $I(X;G)$. We extensively demonstrate that gradients computed while searching for flatter minima in FL achieve a substantial improvement in robustness against GIAs. Our work sheds light on a novel benefit of flat-minima searching: beyond promoting better generalization, it also hardens privacy in FL systems.
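The abstract's specific algorithm is not given here, but flat-minima searching is commonly instantiated as sharpness-aware minimization (SAM), which evaluates the gradient at an adversarially perturbed point so that descent steps penalize high-curvature (large-Hessian) directions. The following toy sketch (names `loss_grad`, `sam_gradient`, and the quadratic loss are illustrative assumptions, not the paper's method) shows the core SAM gradient computation on a two-dimensional quadratic with one sharp and one flat axis:

```python
import numpy as np

# Hypothetical sketch of a SAM-style gradient on a toy quadratic loss
# L(w) = 0.5 * w^T H w, whose Hessian is the fixed matrix H.

def loss_grad(w, H):
    """Gradient of the quadratic loss 0.5 * w^T H w."""
    return H @ w

def sam_gradient(w, H, rho=0.05):
    """Gradient evaluated at the perturbed point w + eps, where eps is a
    steepest-ascent perturbation of norm rho (the SAM inner step)."""
    g = loss_grad(w, H)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    return loss_grad(w + eps, H)

# Toy Hessian with one sharp axis (curvature 10) and one flat axis (0.1).
H = np.diag([10.0, 0.1])
w = np.array([1.0, 1.0])

g_plain = loss_grad(w, H)
g_sam = sam_gradient(w, H)
# The SAM gradient is amplified more along the sharp axis than the flat
# one, so descent with it preferentially escapes high-curvature regions.
```

Intuitively, repeatedly descending with `sam_gradient` drives the iterates toward regions where the Hessian spectrum is small, which is the property the abstract connects to a smaller upper bound on $I(X;G)$.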
Primary Area: learning theory
Submission Number: 6501