Differentially Private Federated Learning: A Client Level Perspective

27 Sept 2018 (modified: 21 Apr 2024) · ICLR 2019 Conference Blind Submission
Abstract: Federated learning is a recent advance in privacy protection. In this setting, a trusted curator aggregates parameters optimized in a decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without the clients ever having to share their data explicitly. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client's contribution during training, and hence information about their data set, can be revealed by analyzing the distributed model. We tackle this problem and propose an algorithm for client-side differentially private federated optimization. The aim is to hide clients' contributions during training while balancing the trade-off between privacy loss and model performance. Empirical studies suggest that, given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance.
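The mechanism the abstract describes can be sketched roughly as follows: each client's parameter update is clipped to bound its influence on the aggregate, the clipped updates are averaged, and Gaussian noise calibrated to that bound is added before the new model is distributed back. The NumPy sketch below is a minimal illustration under those assumptions; the names `dp_fed_avg_round`, `clip_norm`, and `noise_multiplier` are hypothetical, not the paper's exact interface, and a full system would additionally track cumulative privacy loss with a privacy accountant.

```python
import numpy as np

def dp_fed_avg_round(global_params, client_updates, clip_norm, noise_multiplier, rng):
    """One round of client-level differentially private federated averaging (sketch).

    Each client update (local_params - global_params) is clipped to an L2 norm
    of at most `clip_norm`, the clipped updates are averaged, and Gaussian noise
    calibrated to the per-client sensitivity is added before the update is
    applied to the global model.
    """
    m = len(client_updates)
    clipped = []
    for delta in client_updates:
        norm = np.linalg.norm(delta)
        # Scale the update down if its norm exceeds the clipping bound.
        clipped.append(delta / max(1.0, norm / clip_norm))
    avg_update = np.mean(clipped, axis=0)
    # After clipping, one client can shift the average by at most clip_norm / m,
    # so the Gaussian noise is calibrated to that sensitivity.
    sigma = noise_multiplier * clip_norm / m
    noise = rng.normal(0.0, sigma, size=avg_update.shape)
    return global_params + avg_update + noise

# Toy usage: 100 simulated clients, each submitting a random update.
rng = np.random.default_rng(0)
global_params = np.zeros(10)
updates = [rng.normal(size=10) for _ in range(100)]
global_params = dp_fed_avg_round(global_params, updates, clip_norm=1.0,
                                 noise_multiplier=1.0, rng=rng)
```

Note how the noise standard deviation shrinks as the number of participating clients `m` grows, which is why a sufficiently large client population lets the procedure hide individual contributions at only a minor cost in model performance.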
Keywords: Machine Learning, Federated Learning, Privacy, Security, Differential Privacy
TL;DR: Ensuring that models learned in federated fashion do not reveal a client's participation.
Code: [cyrusgeyer/DiffPrivate_FedLearning](https://github.com/cyrusgeyer/DiffPrivate_FedLearning) · [4 community implementations on Papers with Code](https://paperswithcode.com/paper/?openreview=SkVRTj0cYQ)
Community Implementations: [5 code implementations on CatalyzeX](https://www.catalyzex.com/paper/arxiv:1712.07557/code)