A Novel Model-Based Attribute Inference Attack in Federated Learning

Published: 21 Oct 2022, Last Modified: 05 May 2023
FL-NeurIPS 2022 Poster
Keywords: Federated learning, privacy risk, inference attack
Abstract: In federated learning, clients such as mobile devices or data silos (e.g., hospitals and banks) collaboratively improve a shared model while keeping their data local. Multiple recent works show that a client's private information can still be disclosed to an adversary who merely eavesdrops on the messages exchanged between the targeted client and the server. In this paper, we propose a novel model-based attribute inference attack in federated learning which overcomes the limits of gradient-based ones. Furthermore, we provide an analytical lower bound on the success of this attack. Empirical results on real-world datasets confirm that our attribute inference attack works well for both regression and classification tasks. Moreover, we benchmark our novel attribute inference attack against state-of-the-art attacks in federated learning. Our attack achieves higher reconstruction accuracy, especially when the clients' datasets are heterogeneous (as is common in federated learning).
Is Student: Yes
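
For readers unfamiliar with the threat model, the sketch below illustrates the general idea of a model-based attribute inference attack, not the method proposed in this paper. It assumes an adversary who has eavesdropped the shared model's parameters from the client-server messages, knows a victim record's non-sensitive features and label, and guesses the sensitive attribute as the candidate value under which the model best explains the observed label. The linear model, function names, and numbers are hypothetical placeholders.

```python
# Illustrative sketch only (not the paper's attack): generic model-based
# attribute inference. The adversary plugs each candidate value of the
# sensitive attribute into the eavesdropped model and keeps the value that
# minimizes the prediction error on the victim's known (features, label).
import numpy as np

def predict(weights, bias, x):
    # Assumed eavesdropped shared model: a simple linear regressor.
    return weights @ x + bias

def infer_attribute(weights, bias, known_features, label, candidates, sensitive_idx):
    best_value, best_loss = None, np.inf
    for value in candidates:
        x = known_features.copy()
        x[sensitive_idx] = value              # hypothesize a sensitive value
        loss = (predict(weights, bias, x) - label) ** 2
        if loss < best_loss:
            best_value, best_loss = value, loss
    return best_value

# Toy usage with made-up numbers: feature index 2 is a binary sensitive attribute.
w, b = np.array([0.5, -1.0, 2.0]), 0.1
victim_x = np.array([1.0, 0.3, 0.0])          # sensitive entry unknown to adversary
true_y = predict(w, b, np.array([1.0, 0.3, 1.0]))  # label produced with true value 1.0
print(infer_attribute(w, b, victim_x, true_y, candidates=[0.0, 1.0], sensitive_idx=2))
```

In this toy setting the attack recovers the true value 1.0; the paper's model-based attack and its analytical lower bound are described in the full text.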