Byzantine-Robust Federated Learning

Published: 2025 · Last Modified: 07 Jan 2026 · ICCCN 2025 · CC BY-SA 4.0
Abstract: Federated Learning (FL) is a transformative paradigm for training global machine learning models using decentralized datasets hosted by edge or client devices, without requiring centralized data aggregation. This makes FL particularly valuable for privacy-sensitive applications deployed in edge-cloud environments. However, FL is vulnerable to Byzantine attacks, where malicious clients provide falsified local model updates to compromise the performance of the global model. Such vulnerabilities are especially problematic in heterogeneous and resource-constrained edge-cloud systems, where ensuring trust across distributed clients is a significant challenge. This paper introduces a novel Byzantine-Robust FL method designed to address these challenges in edge-cloud computing scenarios. Our method leverages a central server equipped with a small clean dataset as an initial root of trust to evaluate the trustworthiness of client updates. Unlike prior approaches, our method iteratively builds a dynamic set of trusted clients, gradually incorporating their updates into the training process to refine the global model. This dynamic trust mechanism reduces reliance on the quality of the initial clean dataset, ensuring robustness even in the presence of a significant number of malicious clients. Extensive experiments on real-world datasets demonstrate the effectiveness of our approach in achieving high model accuracy and maintaining robustness against Byzantine attacks. Our method is particularly well-suited for edge-cloud environments, addressing critical challenges such as resource constraints, distributed learning management, and the reliability of collaborative AI systems.
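The abstract does not spell out the exact trust-scoring rule, but a minimal sketch of one such dynamic-trust aggregation round is shown below, assuming flattened NumPy update vectors, a cosine-similarity score against the reference update computed on the server's small clean dataset, and a hypothetical admission threshold `tau`. The server-side reference update serves as the initial root of trust, and the trusted set is grown (or pruned) across rounds rather than fixed up front.

```python
import numpy as np

def cosine(u, v, eps=1e-12):
    """Cosine similarity between two flattened update vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

def aggregate_round(global_model, client_updates, server_update, trusted, tau=0.2):
    """One round of trust-aware aggregation (illustrative sketch only).

    client_updates : dict client_id -> flattened model update (np.ndarray)
    server_update  : update computed on the server's small clean dataset,
                     used as the root-of-trust reference direction
    trusted        : set of client ids trusted so far (carried across rounds)
    tau            : assumed similarity threshold for admitting a client
    """
    # Score each client's update against the clean-data reference direction.
    scores = {cid: cosine(upd, server_update) for cid, upd in client_updates.items()}

    # Dynamically expand the trusted set with sufficiently aligned clients,
    # and drop previously trusted clients whose updates turn adversarial.
    for cid, s in scores.items():
        if s >= tau:
            trusted.add(cid)
        elif s < 0:
            trusted.discard(cid)

    # Aggregate only trusted clients, weighting by their clipped trust scores.
    members = [cid for cid in client_updates if cid in trusted]
    if not members:
        # No trusted clients this round: fall back to the clean-data update alone.
        return global_model + server_update, trusted
    weights = np.array([max(scores[cid], 0.0) for cid in members])
    if weights.sum() > 0:
        weights = weights / weights.sum()
    else:
        weights = np.full(len(members), 1.0 / len(members))
    agg = sum(w * client_updates[cid] for w, cid in zip(weights, members))
    return global_model + agg, trusted
```

In this sketch, clients whose updates point away from the clean-data reference contribute nothing to the global model, so a large fraction of Byzantine clients can be tolerated as long as the small clean dataset yields a usable reference direction in early rounds; once a stable trusted set has formed, the aggregation depends mostly on the trusted clients rather than on that initial dataset.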