Keywords: Privacy-preserving, Model inversion attack, Federated Learning
TL;DR: A method for improving privacy in federated learning by obfuscating sensitive data with adaptively synthesized concealed samples.
Abstract: Federated Learning (FL) is a distributed learning paradigm that promises to protect users’ privacy by not requiring the clients to share their raw and private data with the server. Despite this success, recent studies reveal the vulnerability of FL to model inversion attacks by showing that an adversary can reconstruct users’ private data by eavesdropping on the shared gradient information. Most existing defence methods for preserving privacy in FL are formulated to protect all data samples equally, an approach that has proven brittle against attacks and that compromises FL performance. In this paper, we argue that data containing sensitive information should take precedence. We present a simple, yet effective defence strategy that obfuscates the gradients of the sensitive data with concealed samples. In doing so, we propose to synthesize concealed samples to simulate the sensitive data at the gradient level. Furthermore, we employ a gradient projection technique to obscure sensitive data without compromising the quality of the shared gradients, hence enabling FL to retain its performance. Compared to prior art, our empirical evaluations suggest that the proposed technique provides the strongest protection while simultaneously maintaining the FL performance. We also provide examples of how the proposed method can be combined with other defences to boost the privacy-performance trade-off even further.
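The abstract does not spell out the algorithm, but one plausible reading of "gradient projection to obscure sensitive data" is sketched below in PyTorch: treat the gradient of the sensitive samples as a direction to be removed from the client's shared gradient via orthogonal projection, then add the gradient of the synthesized concealed samples in its place. All names here (flat_grad, obfuscated_client_update, the batch arguments) are hypothetical illustrations, not the paper's implementation; in particular, how concealed samples are synthesized to match sensitive gradients is the paper's contribution and is not reproduced here.

```python
import torch

def flat_grad(model, loss_fn, batch):
    """Return the model's loss gradient for one batch, flattened to a single vector."""
    x, y = batch
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return torch.cat([p.grad.detach().flatten() for p in model.parameters()])

def project_out(g, g_sensitive, eps=1e-12):
    """Orthogonal projection: remove the component of g that lies along g_sensitive."""
    coeff = torch.dot(g, g_sensitive) / (g_sensitive.norm() ** 2 + eps)
    return g - coeff * g_sensitive

def obfuscated_client_update(model, loss_fn, full_batch, sensitive_batch, concealed_batch):
    """Sketch of one client round: share a gradient whose sensitive component is
    suppressed and masked by the gradient of synthesized concealed samples."""
    g_full = flat_grad(model, loss_fn, full_batch)       # gradient the client would normally share
    g_sens = flat_grad(model, loss_fn, sensitive_batch)  # gradient that could leak sensitive data
    g_conc = flat_grad(model, loss_fn, concealed_batch)  # gradient of concealed (synthetic) samples
    return project_out(g_full, g_sens) + g_conc          # obfuscated gradient sent to the server
```

Under this reading, the projection step is what preserves utility: only the component aligned with the sensitive samples is removed, so the bulk of the shared gradient, and hence FL convergence, is left intact.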
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (e.g., AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)
Supplementary Material: zip