Boosting Federated Model Convergence with Anomaly Detection and Exclusion

ICLR 2026 Conference Submission 8039 Authors

16 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: federated learning, security, convergence, anomaly detection
Abstract: Federated Learning (FL) is becoming increasingly important in AI training, particularly for privacy-sensitive applications. At the same time, it has become a target of malicious activity and needs better protection against adversarial attacks that cause data corruption and other anomalies. In this work, we show that, contrary to a popular point of view, a properly introduced security enhancement can improve FL convergence and performance. Taking inspiration from classical PID control theory, we develop a novel anomaly detection and exclusion approach. Unlike other aggregation techniques that rely solely on current-round Euclidean distances between clients, we compute a PID-based, history-aware score and use it to detect anomalies that exceed a statistically defined threshold. Our adaptive exclusion mechanism removes the need for a predefined attacker count, and its server-side linear computational complexity of $O(nd)$ ensures scalability and practical significance, whereas existing methods remain superlinear in complexity. We theoretically prove and experimentally verify faster convergence and computational efficiency on several benchmark datasets of various modalities, including non-iid scenarios and different model architectures such as CNNs and LLMs, and show that our method maintains effectiveness while boosting convergence. Our approach generalizes across diverse task domains and aggregation methods, and is easily implementable in practice.
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 8039