Abstract: Federated learning (FL) is a distributed machine learning paradigm that allows machine learning models to be trained without raw data leaving the local devices. However, due to its distributed architecture, FL is vulnerable to reconstruction attacks and Byzantine attacks: reconstruction attacks allow attackers to recover original data from shared gradients, while Byzantine attacks can dramatically degrade the accuracy of the federated model through manipulated local model updates. To address these problems, several privacy-preserving robust FL schemes have been proposed, but they remain impractical due to heavy cryptographic operations and complex, unoptimized secure aggregation rules. Therefore, we propose EP-FLTrust, an efficient, privacy-preserving, and Byzantine-robust scheme that maintains robustness while preventing information leakage during the FL process, with lower latency and bandwidth consumption than previous works. Specifically, we introduce a trusted third party to customize several optimized two-party computation (2PC) protocols and design a clipping function, DReLU, that requires only 1 bit of storage; together these reduce the computation and communication complexity from $O(dn^2)$ to $O(dn)$. We give a security proof of our scheme and establish a performance evaluation test-bed. Our results show that EP-FLTrust achieves the same robustness as state-of-the-art schemes while reducing computation time by around 50× and communication cost by around 10× compared with state-of-the-art privacy-preserving schemes.