Keywords: Certified Robustness, Differential Privacy, Federated Learning
Abstract: Federated learning (FL) provides an efficient training paradigm for jointly training a global model by leveraging data from distributed users.
Because the local training data come from different users who may not be trustworthy, several studies have shown that FL is vulnerable to poisoning attacks, in which adversaries inject malicious data during training. On the other hand, to protect user privacy, FL is usually trained with differential privacy (DPFL). Given these properties of FL, in this paper we ask: Can we leverage the innate privacy property of DPFL to provide robustness certification against poisoning attacks? Can we further improve the privacy of FL to strengthen such certification?
To this end, we first investigate both the user-level and instance-level privacy of FL, and propose novel randomization mechanisms and analysis to achieve improved differential privacy.
We then provide two robustness certification criteria for DPFL at both levels: certified prediction and certified attack cost. Theoretically, given different privacy properties of DPFL, we prove its certified robustness under a bounded number of adversarial users or instances.
Empirically, we conduct extensive experiments to verify our theories under different attacks on a range of datasets. We show that a global model with a tighter privacy guarantee always provides stronger robustness certification in terms of the certified attack cost, while it may exhibit tradeoffs in terms of the certified prediction.
We believe our work will inspire future research on developing certifiably robust DPFL based on its inherent privacy properties.
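To make the kind of mechanism and certification criteria described in the abstract more concrete, the following is a minimal sketch, not the paper's actual algorithms: a user-level DP aggregation step (update clipping plus Gaussian noise, in the spirit of DP-FedAvg) and a majority-vote certified-prediction helper. All function names, parameters, and aggregation details here are illustrative assumptions rather than the authors' proposed mechanisms.

```python
import numpy as np


def dp_fedavg_round(global_w, user_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip each user's model update in L2 norm, average the clipped updates,
    and add Gaussian noise scaled to the per-user sensitivity so the aggregate
    satisfies user-level differential privacy (hypothetical parameters)."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in user_updates]
    avg_update = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(user_updates)  # noise stddev
    return global_w + avg_update + rng.normal(0.0, sigma, size=avg_update.shape)


def certified_prediction(models, x):
    """Majority vote over models trained with independent DP randomness.
    Returns the top class and its vote margin over the runner-up; a
    certification analysis would combine this margin with the (epsilon, delta)
    guarantee to bound how much a limited number of poisoned users or
    instances can flip the prediction."""
    votes = {}
    for predict in models:
        c = int(predict(x))
        votes[c] = votes.get(c, 0) + 1
    ranked = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
    top_class, top_count = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0
    return top_class, top_count - runner_up
```

In such a sketch, increasing noise_multiplier tightens the privacy guarantee, which is the knob that the certified attack cost results in the abstract rely on; the vote margin plays the analogous role for certified prediction.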
One-sentence Summary: We derive certified robustness against poisoning attacks for free in differentially private federated learning.
Supplementary Material: zip