Abstract: Federated Learning (FL) is a framework that enables workers to collaboratively train a shared global model in a decentralized manner: instead of transferring raw data to a centralized location, workers train the shared model locally. However, participating in FL tasks consumes communication and computation resources and poses privacy risks, so workers are naturally reluctant to engage in training without reasonable rewards. Moreover, malicious workers may submit harmful local models to undermine the global model and claim undeserved rewards. To tackle these challenges, we propose RIFL, which fairly motivates honest workers to participate in FL tasks and prevents malicious workers from corrupting the shared global model. We employ centered kernel alignment (CKA) to measure the similarity between the local models submitted by workers and the global model, and then apply a similarity-clustering approach to identify and discard local models from potentially malicious workers. Additionally, a reward allocation mechanism based on reputation and data contribution is designed to motivate workers with high-quality data to participate and to prevent intermittent attackers from gaining undeserved rewards. Finally, extensive experiments on benchmark datasets show that RIFL achieves high fairness and robustness, improving global model accuracy and motivating workers with high-quality data to participate in FL tasks under non-IID and unreliable scenarios.
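The abstract does not specify which CKA variant RIFL uses or how similarity scores feed the clustering step. The sketch below is a minimal illustration, assuming linear CKA (as in Kornblith et al., 2019) computed on activations that each local model and the global model produce for a shared probe batch; the `linear_cka` and `filter_workers` helpers, and the median-gap filter standing in for the paper's similarity clustering, are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two activation matrices of shape (n, d1) and
    (n, d2), whose rows correspond to the same n probe inputs.
    Returns a value in [0, 1]; 1 means identical representations
    up to rotation and isotropic scaling."""
    X = X - X.mean(axis=0, keepdims=True)  # column-center features
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    self_x = np.linalg.norm(X.T @ X, "fro")
    self_y = np.linalg.norm(Y.T @ Y, "fro")
    return float(cross / (self_x * self_y))

def filter_workers(global_acts: np.ndarray,
                   local_acts: dict[str, np.ndarray],
                   gap: float = 0.2) -> list[str]:
    """Score each worker's local model by CKA similarity to the global
    model, then keep workers whose score lies within `gap` of the
    median score (a simple stand-in for RIFL's similarity clustering)."""
    scores = {w: linear_cka(global_acts, a) for w, a in local_acts.items()}
    median = float(np.median(list(scores.values())))
    return [w for w, s in scores.items() if s >= median - gap]
```

The median-gap threshold is only a placeholder to keep the sketch self-contained; per the abstract, RIFL clusters the similarity scores to separate honest from potentially malicious workers rather than thresholding against a fixed margin.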
External IDs: dblp:conf/icic/TangLO24