Keywords: Byzantine robustness, distributed learning, secure aggregation
Abstract: Increasingly, machine learning systems are being deployed to edge servers and devices (e.g., mobile phones) and trained collaboratively. Such distributed/federated/decentralized training raises a number of concerns about the robustness, privacy, and security of the procedure. While extensive work has addressed robustness, privacy, and security individually, their combination has rarely been studied. In this paper, we propose a secure multi-server protocol that offers both input privacy and Byzantine-robustness. In addition, the protocol is communication-efficient, fault-tolerant, and enjoys local differential privacy.
One-sentence Summary: We propose a multi-server protocol that offers both input privacy and Byzantine-robustness, and demonstrate that it is communication-efficient, fault-tolerant, and enjoys local differential privacy.
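The abstract does not spell out which robust aggregation rule the protocol uses. Purely as a generic illustration of the Byzantine-robustness ingredient (not the paper's actual protocol), below is a minimal NumPy sketch of coordinate-wise trimmed-mean aggregation, a standard robust rule; the function name and parameters are hypothetical.

```python
import numpy as np

def trimmed_mean_aggregate(updates: np.ndarray, num_byzantine: int) -> np.ndarray:
    """Coordinate-wise trimmed mean: a standard Byzantine-robust aggregation rule.

    updates: shape (num_workers, dim), one gradient/model update per worker.
    num_byzantine: assumed upper bound f on Byzantine workers; needs num_workers > 2f.
    (Illustrative only; not the protocol proposed in this paper.)
    """
    n, _ = updates.shape
    assert n > 2 * num_byzantine, "need more than 2f workers for a trimmed mean"
    # Sort each coordinate independently across workers.
    sorted_updates = np.sort(updates, axis=0)
    # Drop the f smallest and f largest values per coordinate, then average the rest.
    trimmed = sorted_updates[num_byzantine : n - num_byzantine]
    return trimmed.mean(axis=0)

# Toy usage: 8 honest workers near the true update, 2 Byzantine outliers.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 4))
byzantine = np.full((2, 4), 100.0)  # adversarial updates
all_updates = np.vstack([honest, byzantine])
print(trimmed_mean_aggregate(all_updates, num_byzantine=2))  # close to [1, 1, 1, 1]
```

Trimming f values from each end per coordinate bounds the influence any f malicious workers can exert, which is why rules of this family tolerate Byzantine inputs that a plain mean would not.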
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Supplementary Material: zip
Reviewed Version (pdf): https://openreview.net/references/pdf?id=YXlIOCMIxP