Abstract: Federated Learning (FL) allows multiple data owners to jointly train machine learning models by sharing local models instead of raw private data, alleviating data privacy concerns. However, because the local computation of data owners cannot be verified, FL is vulnerable to Byzantine attacks, in which compromised data owners submit abnormal local models that can severely degrade global model accuracy. Existing Byzantine-robust FL methods depend on a semi-honest server executing predefined Byzantine-robust aggregation rules (ByRules) to filter out abnormal local models, but these methods fail when the server is compromised. Although recent serverless Byzantine-robust FL approaches mitigate the risk of a compromised server, they struggle to reach consensus on ByRules and impose heavy overhead for privacy protection. In this paper, we propose ROBY, a novel serverless FL framework that extends existing ByRules to a decentralized setting, effectively defending against Byzantine attacks while ensuring privacy protection for local models. ROBY introduces a shared, dynamically updated consensus dataset that serves as a reliable benchmark for applying ByRules and enables efficient consensus on ByRules among decentralized data owners. Moreover, we design a dual-layer privacy shielding strategy in ROBY to protect local model privacy without sacrificing global model accuracy or incurring extra computational and communication overhead. Extensive evaluations demonstrate that ROBY substantially enhances both Byzantine robustness and privacy protection compared to server-based FL methods.
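For context, a classic ByRule such as coordinate-wise median bounds the influence of a minority of abnormal local models by aggregating each parameter independently. The sketch below is a minimal, generic illustration of this idea, not ROBY's actual aggregation; the function name and example values are assumptions for demonstration only.

```python
import numpy as np

def coordinate_wise_median(local_models: list[np.ndarray]) -> np.ndarray:
    """Generic ByRule sketch (not ROBY's method): aggregate flattened local
    models by taking the per-coordinate median, which limits how far a
    minority of Byzantine submissions can pull the global model."""
    stacked = np.stack(local_models)   # shape: (n_owners, n_params)
    return np.median(stacked, axis=0)  # robust aggregate, shape: (n_params,)

# Hypothetical example: 4 honest owners plus 1 Byzantine owner.
honest = [np.array([0.9, 1.1]), np.array([1.0, 1.0]),
          np.array([1.1, 0.9]), np.array([1.0, 1.05])]
byzantine = [np.array([100.0, -100.0])]  # abnormal local model
print(coordinate_wise_median(honest + byzantine))  # stays near [1.0, 1.0]
```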
DOI: 10.1109/TIFS.2025.3589066