Abstract: Federated learning (FL) enables collaborative learning among multiple clients to obtain an optimal global model. Since the FL server limits the number of clients per training round, client selection has emerged as a critical research issue. Existing client selection strategies primarily concentrate on either ensuring fairness or optimizing performance in attacker-free FL systems, ignoring the disruption caused by Byzantine attacks. In this work, we propose FBR-FL, a fair client selection scheme that tolerates Byzantine attacks. To detect the abnormal geometry among the models of FL clients under attack, FBR-FL projects local model updates into a manifold space and employs geodesic distance to assess their similarity under Riemannian geometry. Moreover, to achieve fairness under attacks, client selection is framed as an improved Lyapunov optimization problem with penalty rules, so that we can dynamically adjust FL clients' selection probabilities based on their reputations and contributions. Our extensive experiments demonstrate that FBR-FL ensures fair selection of clients under various attacks while maintaining accuracy comparable to FedAvg. In an unreliable scenario containing attackers, FBR-FL achieves an \(18.59\%\) higher Jain's Fairness Index (JFI) than the state-of-the-art client selection scheme. Our code and supplementary material are available at https://github.com/DataMining-Lab/FBR-FL.git.
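As a minimal illustration of the two quantities the abstract names, the sketch below computes a geodesic distance between model updates and Jain's Fairness Index over per-client selection counts. The abstract does not specify the manifold, so the unit hypersphere (geodesic = arc length, i.e. the angle between normalized update vectors) is used here purely as an assumed, common choice; the function names are hypothetical, not from the FBR-FL codebase.

```python
import numpy as np

def geodesic_distance(u, v):
    """Geodesic (great-circle) distance between two model updates
    after projection onto the unit hypersphere (an assumed manifold)."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    # Clip guards against floating-point drift outside [-1, 1].
    return float(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

def jains_fairness_index(counts):
    """Jain's Fairness Index of per-client selection counts:
    (sum x)^2 / (n * sum x^2); equals 1 for perfectly even selection."""
    x = np.asarray(counts, dtype=float)
    return float(x.sum() ** 2 / (len(x) * (x ** 2).sum()))
```

For example, orthogonal (maximally dissimilar) updates yield a distance of pi/2, and a selection history that picks only one of four clients yields a JFI of 0.25.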
External IDs: dblp:conf/prcv/ZhangLLSS24