Foreseeing Privacy Threats from Gradient Inversion Through the Lens of Angular Lipschitz Smoothness

16 May 2022 (modified: 05 May 2023), NeurIPS 2022 Submitted
Keywords: Federated Learning, Privacy Leakage, Gradient Inversion, Lipschitz Smoothness
TL;DR: We systematically re-evaluate recent gradient inversion attacks on a broad spectrum of models and propose the angular Lipschitz constant of the gradient as a predictive measure of a model's vulnerability to such attacks in federated learning.
Abstract: Recent works have proposed server-side input recovery attacks in federated learning (FL), in which an honest-but-curious server recovers clients' data (e.g., images) from shared model gradients, raising doubts about the safety of FL. However, these attacks are typically demonstrated on only a few models, or focus heavily on reconstructing a single image, which is easier than reconstructing a batch (multiple images). In this study, we therefore systematically re-evaluate state-of-the-art (SOTA) attack methods on a variety of models in the context of batch reconstruction. To cover a broad spectrum of models, we consider two types of model variation: implicit (i.e., without any change in architecture) and explicit (i.e., with architectural changes). Motivated by the observation that the quality of the reconstructed image batch differs across models, we propose the angular Lipschitz constant of a model's gradient function with respect to its input as a measure that explains a model's vulnerability to input recovery attacks. The prototype of the proposed measure is derived from our theorem on the convergence of the attacker's gradient matching optimization, and is redesigned into a scale-invariant form to prevent a trivial server-side loss-scaling trick. We demonstrate that the proposed measure predicts vulnerability to recovery attacks by empirically showing its strong monotonic correlation with both the loss drop during gradient matching optimization and the quality of the reconstructed image batch. We expect our measure to be a key factor in developing client-side defensive strategies against privacy threats in our proposed realistic FL setting, the black-box setting, in which the server deliberately conceals global model information from clients, except for model gradients.
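To make the idea concrete, below is a minimal PyTorch sketch of one plausible empirical probe of angular gradient smoothness: the angle between the parameter gradients at two nearby inputs, divided by the relative size of the input perturbation. This is not the paper's definition or theorem; the estimator, the probing scheme, and all names (`flat_grad`, `angular_ratio`, `model`, `loss_fn`) are illustrative assumptions.

```python
# Hypothetical sketch of an empirical angular-smoothness probe for the
# gradient map x -> grad_theta L(f(x), y). All names and the probing
# scheme are illustrative assumptions, not the paper's method.
import torch


def flat_grad(model, loss_fn, x, y):
    """Flatten the parameter gradient of the loss at input batch x."""
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])


def angular_ratio(model, loss_fn, x, y, eps=1e-2):
    """Angle between gradients at x and a randomly perturbed x',
    divided by the relative input perturbation. Larger ratios suggest
    a 'rougher' gradient map under this (assumed) probe."""
    # Perturb x by roughly eps of its own scale.
    x2 = x + eps * torch.randn_like(x) * x.norm() / x.numel() ** 0.5
    g1 = flat_grad(model, loss_fn, x, y)
    g2 = flat_grad(model, loss_fn, x2, y)
    cos = torch.nn.functional.cosine_similarity(g1, g2, dim=0).clamp(-1.0, 1.0)
    angle = torch.arccos(cos)            # angular change of the gradient
    dx = (x2 - x).norm() / x.norm()      # scale-invariant input change
    return (angle / dx).item()


# Usage: a crude estimate is the max ratio over many random probes, e.g.
# est = max(angular_ratio(model, loss_fn, x, y) for _ in range(64))
```

One property worth noting: the angle between gradients is unchanged when the loss (and hence the gradient) is multiplied by a constant, so a measure built on angles is naturally invariant to the server-side loss-scaling trick mentioned in the abstract.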