Abstract: Federated learning (FL) is a rapidly evolving paradigm that facilitates distributed training of large-scale deep neural networks (DNNs). However, this distributed nature exposes the system to threats from potentially malicious or low-quality participants, which can significantly degrade the overall performance of FL. Contribution evaluation approaches in previous FL studies are vulnerable in the presence of sophisticated malicious or low-quality clients. In this article, we propose to assess clients' contributions by treating their model parameters as data. By extracting information from the statistical properties of model parameters using principal component analysis (PCA)-based data mining techniques, we quantitatively estimate the similarity and diversity between different clients. Furthermore, we analyze the convergence of our proposed method and establish a convergence rate of $\mathcal{O}(1/T)$ under commonly accepted assumptions. Extensive experiments are conducted on public datasets to evaluate the effectiveness of our proposed method against typical malicious or low-quality clients: sybil-based backdoor attackers and clients with redundant data. Experimental results demonstrate the superiority of our approach in excluding malicious or low-quality clients and thereby enhancing model performance in FL.
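The core idea of treating client model parameters as data points and mining them with PCA can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function name `client_similarity_pca`, the choice of two principal components, and the use of cosine similarity in the projected subspace are all assumptions made for demonstration.

```python
import numpy as np

def client_similarity_pca(client_params, n_components=2):
    """Sketch: treat each client's flattened parameter vector as one data
    point, project onto the top principal components, and score pairwise
    client similarity in that low-dimensional subspace."""
    # Stack flattened parameter vectors into an (n_clients, n_params) matrix.
    X = np.stack([p.ravel() for p in client_params])
    X_centered = X - X.mean(axis=0)
    # PCA via SVD of the centered parameter matrix.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    Z = X_centered @ Vt[:n_components].T  # low-dimensional client embeddings
    # Cosine similarity between client embeddings.
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)
    return Zn @ Zn.T

# Toy example: four similar "honest" clients plus one dissimilar client.
rng = np.random.default_rng(0)
base = rng.normal(size=100)
honest = [base + 0.01 * rng.normal(size=100) for _ in range(4)]
outlier = [rng.normal(size=100)]
sim = client_similarity_pca(honest + outlier)
```

In this toy setup the four honest clients receive high mutual similarity scores while the outlier is clearly separated, which is the kind of signal a contribution-evaluation rule could use to down-weight or exclude suspicious clients.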
External IDs: dblp:journals/iotj/LiuLLMWLC25