Abstract: Federated learning (FL) is a distributed machine learning scheme that enables clients to train a shared global model without exchanging local data. In FL, the presence of label noise can severely degrade the accuracy of the global model. Although some recent works have focused on designing algorithms for label denoising, they overlook an important issue: clients, being self-interested and having heterogeneous valuations of model accuracy, may not apply costly label denoising strategies. To fill this gap, we model the clients' strategic interactions as a novel label denoising game and determine the clients' equilibrium strategies. We prove that the equilibrium outcome always yields a lower global model accuracy than the socially optimal solution. To motivate efficient label denoising behaviors, we propose a penalty-based incentive mechanism and design the degree of penalty for punishing the clients' undesired denoising behaviors, accounting for inaccurate noise rate detection in FL. We prove that our mechanism achieves social efficiency, individual rationality, and weak budget balance. Numerical experiments on MNIST and CIFAR-10 show that as clients' data become noisier, the gap between the equilibrium outcome and the socially optimal solution widens, verifying the necessity of an incentive mechanism. We empirically show that our proposed mechanism improves model accuracy by up to 4.4% and incentivizes clients to adopt equilibrium strategies that are close to the socially optimal solution.