Abstract: Zhang et al. (2023) recently proposed a secure federated learning (FL) scheme named LSFL, which claims to guarantee Byzantine robustness while protecting privacy in FL. In this work, we show that LSFL breaches the privacy it claims to protect. Specifically, we demonstrate that the secure Byzantine-robustness procedure of LSFL exposes significant information about all participants' models and data to a semi-honest server, thereby compromising privacy. We then analyze the cause of this security issue and suggest a countermeasure to prevent privacy breaches in LSFL.