Conformal prediction has shown impressive capability in constructing statistically rigorous prediction sets for machine learning models with exchangeable data samples. The burgeoning amount of large-scale data, coupled with escalating privacy concerns about local data sharing, has inspired recent innovations extending conformal prediction to federated environments with distributed data samples. However, this framework for distributed uncertainty quantification is susceptible to Byzantine failures: a small subset of malicious clients can significantly compromise the practicality of the coverage guarantees. To address this vulnerability, we introduce Rob-FCP, a novel algorithm for robust federated conformal prediction that effectively counters malicious clients capable of reporting arbitrary statistics during the conformal calibration process. We theoretically establish the conformal coverage bound of Rob-FCP in the Byzantine setting and show that, under mild conditions, the coverage of Rob-FCP is asymptotically close to the desired coverage level in both IID and non-IID settings. We also propose an estimator of the number of malicious clients to tackle the more challenging setting where this number is unknown to the defender, and we theoretically establish its effectiveness. We empirically demonstrate the robustness of Rob-FCP against diverse proportions of malicious clients under a variety of Byzantine attacks on five realistic benchmark and healthcare datasets.
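To make the setting concrete, here is a minimal Python sketch of split conformal calibration and of how a Byzantine client can corrupt a federated calibration step. This is not the paper's Rob-FCP algorithm: the score distribution, client counts, and the trimmed-mean aggregator shown as a generic defense are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1  # target miscoverage: aim for 90% coverage

# --- Split conformal calibration on a single client -----------------------
def client_scores(n: int) -> np.ndarray:
    """Synthetic nonconformity scores, e.g. 1 - model prob. of the true label."""
    return 1.0 - rng.beta(5, 2, size=n)

def conformal_threshold(scores: np.ndarray) -> float:
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q, method="higher")

# --- Federated calibration with Byzantine clients -------------------------
# Honest clients report their local threshold; malicious clients report an
# arbitrarily small statistic to shrink prediction sets and break coverage.
honest = [conformal_threshold(client_scores(500)) for _ in range(8)]
byzantine = [0.0, 0.0]                       # 2 of 10 clients are malicious
reports = np.array(honest + byzantine)

naive = reports.mean()                       # naive federated aggregation
trimmed = np.sort(reports)[2:].mean()        # drop the 2 smallest reports

print(f"naive threshold:   {naive:.3f}")     # dragged down by the attackers
print(f"trimmed threshold: {trimmed:.3f}")   # close to the honest average
```

The trimmed aggregation above assumes the defender knows how many clients are malicious; the estimator proposed in the paper targets exactly the case where that number is unknown.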