Abstract: Federated learning (FL) is vulnerable to poisoning attacks, where malicious clients manipulate their updates to corrupt the global model. Although various methods exist for detecting such clients in FL, identification requires observing a sufficient number of model updates, and hence by the time malicious clients are detected, the FL model has already been poisoned. Thus, a method is needed to recover an accurate global model after malicious clients are identified. Current recovery methods rely on (i) the full historical information from all participating FL clients and (ii) the initial model unaffected by the malicious clients, both of which impose high storage and computational demands. In this paper, we show that highly effective recovery can still be achieved based on (i) selective historical information rather than all historical information and (ii) a historical model that has not been significantly affected by malicious clients rather than the initial model. This allows us to accelerate recovery and reduce memory consumption while maintaining comparable recovery performance. Following this concept, we introduce Crab (Certified Recovery from Poisoning Attacks and Breaches), an efficient and certified recovery method that relies on selective information storage and adaptive model rollback. Theoretically, we demonstrate that the difference between the global model recovered by Crab and the one recovered by training from scratch can be bounded under certain assumptions. Our experiments, performed across four datasets with multiple machine learning models and aggregation methods, involving both untargeted and targeted poisoning attacks, demonstrate that Crab is not only accurate and efficient but also consistently outperforms previous approaches in recovery speed and memory consumption.
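To make the recovery concept concrete, the following is a minimal toy sketch of the idea the abstract describes: the server stores checkpoints and a buffer of historical client updates, rolls back to the last checkpoint recorded before the identified malicious client contributed, and replays only the stored benign updates. All names, the toy objective, the per-round storage, and the rollback rule here are illustrative assumptions for exposition, not Crab's actual selective-storage or adaptive-rollback algorithm.

```python
# Toy FL recovery sketch (assumed names and logic, not the paper's algorithm).
import numpy as np

DIM = 4

def aggregate(grads):
    # FedAvg-style unweighted mean of client updates.
    return np.mean(grads, axis=0)

def benign_update(model):
    # Toy objective: honest clients pull the model toward zero.
    return -0.1 * model

def run_training(num_rounds, attack_round, malicious_id):
    """Train with clients 0-2; the malicious client joins at attack_round."""
    model = np.ones(DIM)
    checkpoints, buffers = {}, {}
    for r in range(num_rounds):
        checkpoints[r] = model.copy()  # model at the start of round r
        participants = [0, 1, 2] + ([malicious_id] if r >= attack_round else [])
        updates = []
        for cid in participants:
            g = benign_update(model)
            if cid == malicious_id:
                g = g + 5.0  # poisoned contribution
            updates.append((cid, g))
        buffers[r] = updates  # stored historical information
        model = model + aggregate([g for _, g in updates])
    return model, checkpoints, buffers

def recover(checkpoints, buffers, malicious_ids):
    """Roll back to the last clean checkpoint, replay stored benign updates."""
    first_bad = min(r for r, ups in buffers.items()
                    if any(cid in malicious_ids for cid, _ in ups))
    model = checkpoints[first_bad].copy()  # unaffected historical model
    for r in range(first_bad, max(buffers) + 1):
        benign = [g for cid, g in buffers[r] if cid not in malicious_ids]
        if benign:
            model = model + aggregate(benign)
    return model
```

In this toy setting, the recovered model ends up far closer to a benign train-from-scratch baseline than the poisoned model does, while starting from a mid-training checkpoint rather than the initial model; the paper's contribution is making the stored information selective and the rollback point adaptive with certified guarantees.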
External IDs: doi:10.1109/tifs.2025.3533907