Towards Federated Learning against Noisy Clients via CLIP-Guided Prototypes
Abstract: Federated Noisy Label Learning (FNLL) allows a global model to be jointly trained across multiple clients whose labels are corrupted to varying degrees while preserving privacy. Despite recent advances, distinguishing clean from noisy samples on each client remains difficult because client label distributions are typically both noisy and class-imbalanced, which degrades the performance of existing FNLL methods. To address this problem, we propose FedPN, a novel framework and the first to exploit Contrastive Language-Image Pre-training (CLIP) for federated noisy-label tasks. To further improve the global model, we introduce an attention-based Prototype Adapter that identifies more plausible local samples for local model training, improving training stability. We validate the effectiveness of FedPN through extensive experiments on benchmark datasets under both Independently and Identically Distributed (IID) and Non-IID data partitions. The results show that FedPN effectively filters noisy samples across clients and, compared with state-of-the-art FNLL methods, achieves performance improvements of up to 8.39% and at least 0.88% under highly heterogeneous noisy labels.
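As background for how CLIP guidance can flag noisy labels, the following is a minimal sketch (not FedPN's actual Prototype Adapter or federated protocol): it uses CLIP text embeddings as class prototypes and scores how plausible each client-provided label is for an image batch. The `class_names` list, the prompt template, and any downstream threshold are illustrative placeholders.

```python
import torch
import clip  # OpenAI CLIP package: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical class names; in practice these come from the task's label set.
class_names = ["airplane", "automobile", "bird", "cat", "dog"]
text_tokens = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

with torch.no_grad():
    # Text embeddings act as fixed class prototypes.
    text_feats = model.encode_text(text_tokens)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)

def label_plausibility(images: torch.Tensor, noisy_labels: torch.Tensor) -> torch.Tensor:
    """Return CLIP's probability for each sample's (possibly noisy) label.

    images: preprocessed image batch of shape (B, 3, H, W)
    noisy_labels: integer labels of shape (B,)
    """
    with torch.no_grad():
        img_feats = model.encode_image(images.to(device))
        img_feats = img_feats / img_feats.norm(dim=-1, keepdim=True)
        # Cosine similarity to every class prototype, turned into a distribution.
        probs = (100.0 * img_feats @ text_feats.T).softmax(dim=-1)
    # Low plausibility suggests the client label may be noisy.
    return probs[torch.arange(len(noisy_labels)), noisy_labels.to(device)]
```

A client could, for example, keep only samples whose plausibility exceeds a chosen threshold before local training; the threshold and any adapter on top of these prototypes are design choices not specified here.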