Abstract: Federated learning has gained significant interest as it enables model training over large volumes of data distributed across many users. However, a malicious or dishonest aggregator may still infer sensitive information, reconstruct private data from local model updates, or even disrupt the training process. To address this problem, researchers have proposed many effective methods based on privacy-preserving technologies, such as secure multiparty computation (MPC), homomorphic encryption (HE), and differential privacy. However, these methods not only ignore users' address and identity privacy, but also lack a feasible scheme for tracing malicious users and malicious gradients. In this paper, we propose a general decentralized Byzantine-fault-tolerant federated learning protocol, named TFPA, which can integrate multiple learning algorithms. The protocol not only ensures the accuracy of aggregation under the \(4f+1\) adversary setting, but also guarantees user address privacy and identity privacy. In addition, we provide a heuristic scheme for discovering and tracing malicious gradients, which helps participants resist malicious gradients and, to a certain extent, ensures the fairness of aggregation. We evaluate our framework on Linear Regression, Logistic Regression, SVM, MLP, and RNN, and obtain good results in both accuracy and performance. Finally, we also briefly prove the correctness and security of TFPA.