Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Federated Learning, Poisoning attacks and defenses
Abstract: Collaborative (federated) learning enables multiple parties to train a global model without sharing their private data, notably through repeated sharing of the parameters of their local models. Despite its advantages, this approach has many known security and privacy weaknesses, and is limited to models with the same architecture. We argue that the core reason for these security and privacy issues is the naive exchange of high-dimensional model parameters in federated learning algorithms. This increases the malleability of the trained global model to poisoning attacks and exposes the parties' sensitive local datasets to inference attacks. We propose Cronus, a robust collaborative learning framework that supports heterogeneous model architectures. The simple yet effective idea behind Cronus is to significantly reduce the dimensionality of the information exchanged between parties. This allows us to impose a very tight bound on the error of the aggregation algorithm in the presence of adversarial updates from malicious parties. We implement this through a robust knowledge transfer protocol between the local models. We evaluate prior federated learning algorithms against poisoning attacks, and we show that Cronus is the only secure method that withstands parameter poisoning attacks. Furthermore, treating local models as black boxes significantly reduces the information leakage about their sensitive training data. We show this using membership inference attacks.
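The abstract's key idea can be illustrated with a small sketch: instead of averaging high-dimensional parameter vectors, parties exchange low-dimensional soft-label predictions on a shared public dataset, and the server robustly aggregates them. The abstract does not specify the aggregation algorithm, so the coordinate-wise trimmed mean below is a hypothetical stand-in for the robust mean estimator Cronus would use; the shapes, the `trim_frac` parameter, and the toy data are all illustrative assumptions.

```python
import numpy as np

def trimmed_mean(preds, trim_frac=0.2):
    """Coordinate-wise trimmed mean over parties' soft-label predictions.

    preds: array of shape (n_parties, n_samples, n_classes).
    For each coordinate, the most extreme values across parties are
    discarded before averaging, which bounds the influence any small
    fraction of malicious parties can exert on the aggregate.
    (A stand-in for a robust mean estimator; not the paper's exact method.)
    """
    n = preds.shape[0]
    k = int(n * trim_frac)
    sorted_preds = np.sort(preds, axis=0)  # sort across parties, per coordinate
    kept = sorted_preds[k:n - k] if k > 0 else sorted_preds
    return kept.mean(axis=0)

# Toy round: 10 parties predict on 4 public samples with 3 classes;
# 2 parties send adversarial one-hot updates pushing class 0.
rng = np.random.default_rng(0)
honest = rng.dirichlet(np.ones(3), size=(8, 4))
malicious = np.zeros((2, 4, 3))
malicious[..., 0] = 1.0
preds = np.concatenate([honest, malicious], axis=0)

# Aggregated soft labels, which each party would then distill from.
agg = trimmed_mean(preds, trim_frac=0.2)
```

Because the exchanged object is an `n_samples × n_classes` prediction matrix rather than a full parameter vector, robust aggregation operates in a much lower-dimensional space, which is what enables the tight error bound the abstract refers to; it also keeps each local model a black box and places no constraint on its architecture.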
One-sentence Summary: We propose Cronus, a robust collaborative learning framework, to mitigate the susceptibility to poisoning attacks of existing federated learning algorithms.