Client-Private Secure Aggregation for Privacy-Preserving Federated Learning

Published: 21 Oct 2022, Last Modified: 05 May 2023
FL-NeurIPS 2022 Poster
Keywords: federated learning, privacy-preserving federated learning, secure aggregation, homomorphic encryption, secure multi-party computation, differential privacy
TL;DR: We construct a secure aggregation protocol for privacy-preserving federated learning that enforces privacy of each client's training data against all parties and hides the global model from the server.
Abstract: Privacy-preserving federated learning (PPFL) is a paradigm of distributed privacy-preserving machine learning training in which a set of clients, each holding siloed training data, jointly compute a shared global model under the orchestration of an aggregation server. The system has the property that no party learns any information about any client's training data beyond what can be inferred from the global model. The core cryptographic component of a PPFL scheme is the secure aggregation protocol, a secure multi-party computation protocol in which the server securely aggregates the clients' locally trained models into an aggregated global model, which it distributes to the clients. However, in many applications the global model represents a trade secret of the consortium of clients, which they may not wish to reveal in the clear to the server. In this work, we propose a novel model of secure aggregation, called client-private secure aggregation (CPSA), in which the server computes an encrypted global model that only the clients can decrypt. We provide three explicit constructions of CPSA that exhibit varying trade-offs. We also present experimental results demonstrating the practicality of our constructions in the cross-silo setting when scaled to 250 clients.
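
To make the client-private aggregation idea concrete, the following toy Python sketch shows one round in which clients mask their (integer-quantized) model updates with pads derived from a key shared only among clients; the server adds the masked vectors and thus never sees the plaintext aggregate, which only key-holding clients can unmask. This is an illustrative assumption-laden sketch, not one of the paper's three constructions: the shared key SHARED_KEY, the modulus Q, and the hash-based pad are hypothetical choices for demonstration only.

# Toy client-private secure aggregation round (illustrative only).
# Assumed setup: clients share SHARED_KEY; the server never holds it.
import hashlib

Q = 2**61 - 1          # modulus for masked arithmetic (assumed parameter)
NUM_CLIENTS = 3
SHARED_KEY = b"clients-only-secret"   # held by clients, never by the server


def pad(key: bytes, client_id: int, round_id: int, dim: int) -> list[int]:
    """Derive a per-client pseudorandom mask from the shared key."""
    out = []
    for j in range(dim):
        digest = hashlib.sha256(key + f"{client_id}:{round_id}:{j}".encode()).digest()
        out.append(int.from_bytes(digest[:8], "big") % Q)
    return out


def client_encrypt(update: list[int], client_id: int, round_id: int) -> list[int]:
    """Client-side: add the key-derived mask to the local model update."""
    mask = pad(SHARED_KEY, client_id, round_id, len(update))
    return [(u + m) % Q for u, m in zip(update, mask)]


def server_aggregate(ciphertexts: list[list[int]]) -> list[int]:
    """Server-side: sum masked vectors; the result stays masked."""
    agg = [0] * len(ciphertexts[0])
    for c in ciphertexts:
        agg = [(a + x) % Q for a, x in zip(agg, c)]
    return agg


def clients_decrypt(masked_sum: list[int], round_id: int, num_clients: int) -> list[int]:
    """Any key-holding client removes the combined masks to reveal the aggregate."""
    total_mask = [0] * len(masked_sum)
    for cid in range(num_clients):
        m = pad(SHARED_KEY, cid, round_id, len(masked_sum))
        total_mask = [(t + x) % Q for t, x in zip(total_mask, m)]
    return [(s - t) % Q for s, t in zip(masked_sum, total_mask)]


if __name__ == "__main__":
    round_id = 0
    updates = [[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]]
    ciphertexts = [client_encrypt(u, cid, round_id) for cid, u in enumerate(updates)]
    encrypted_global = server_aggregate(ciphertexts)  # the server's entire view
    print(clients_decrypt(encrypted_global, round_id, NUM_CLIENTS))  # [111, 222, 333, 444]

In this sketch the "encrypted global model" is simply the masked sum held by the server; the paper's actual constructions instead rely on homomorphic encryption, secure multi-party computation, and differential privacy techniques with varying trade-offs.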