Abstract: Federated learning (FL) is vulnerable to poisoning attacks, in which malicious clients tamper with their model parameters to degrade the global model. Existing defenses against poisoning attacks rely primarily on identifying malicious clients but struggle to balance robustness and efficiency. To address these issues, we propose FedPTA, a Prior-based Tensor Approximation (PTA) method whose core idea is to detect malicious clients in federated learning by exploiting inherent priors. The method first formulates multi-round model parameters as a three-dimensional tensor and unfolds it along different dimensions. It then integrates three inherent priors, namely the similarity among benign clients, the continuity of client model parameters across rounds, and the sparsity of malicious parameters, into a convex optimization framework. Solving this optimization yields the optimal background tensor and anomaly tensor. Finally, the anomaly tensor highlights the element-level features of malicious parameters, effectively distinguishing malicious clients. Experiments, supported by theoretical analysis, demonstrate the effectiveness of FedPTA, which outperforms current state-of-the-art methods in both detection accuracy and computational efficiency.
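The abstract does not specify the optimization details, so the following is only a minimal, hypothetical sketch of the kind of decomposition it describes: stack multi-round client updates into a tensor, unfold it along the client mode, and split the result into a low-rank "background" part (benign similarity and temporal continuity priors) plus a sparse "anomaly" part (sparsity prior on malicious parameters) using a simplified robust-PCA-style alternation. All function names, parameters, and the thresholding rule below are assumptions for illustration, not the paper's actual FedPTA algorithm.

```python
import numpy as np

def detect_malicious_clients(params, lam=None, mu=1.0, n_iter=200):
    """Hypothetical sketch of prior-based tensor approximation for FL defense.

    params: array of shape (rounds, clients, dims) holding multi-round client updates.
    Returns indices of suspected malicious clients and per-client anomaly scores.
    """
    R, C, D = params.shape
    # Unfold the 3-D tensor along the client mode: one row per client.
    X = params.transpose(1, 0, 2).reshape(C, R * D)
    if lam is None:
        lam = 1.0 / np.sqrt(max(X.shape))  # common robust-PCA weighting heuristic
    B = np.zeros_like(X)  # "background": low-rank part (benign similarity/continuity)
    A = np.zeros_like(X)  # "anomaly": sparse part (malicious parameters)
    for _ in range(n_iter):
        # Low-rank step: singular-value thresholding on X - A.
        U, s, Vt = np.linalg.svd(X - A, full_matrices=False)
        B = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Sparse step: element-wise soft thresholding on X - B.
        resid = X - B
        A = np.sign(resid) * np.maximum(np.abs(resid) - lam / mu, 0.0)
    # Score each client by the energy of its rows in the anomaly part.
    scores = np.linalg.norm(A, axis=1)
    threshold = scores.mean() + 2 * scores.std()  # illustrative cutoff, not from the paper
    return np.where(scores > threshold)[0], scores
```

In this sketch the anomaly matrix A plays the role of the paper's anomaly tensor: clients whose rows carry disproportionate energy are flagged as malicious.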
External IDs: dblp:journals/tifs/MuCLZGS24