Client Evaluation and Revision in Federated Learning: Towards Defending Free-Riders and Promoting Fairness
Abstract: Federated learning (FL) is valuable to critical industries, yet it is susceptible to security threats. In particular, FL scenarios that involve sensitive data are vulnerable to free-rider attacks, which allow free-riders to obtain a global model trained on sensitive data without contributing to the training process. Current free-rider defenses struggle to balance defense effectiveness against participant model quality in heterogeneous data scenarios. In particular, participants with limited training-sample variety, termed “Mavericks”, may be misclassified as free-riders by conventional defenses or assigned poor-quality models. In this paper, we propose Client Evaluation and Revision in Federated Learning (CERFL), which effectively identifies and blocks free-riders and quantifies participants’ contributions according to the parameters they upload. We introduce a client contribution score (CCS) to quantitatively assess client involvement. CCS plays a dual role, dynamically regulating both the share of parameters contributed by clients during the model aggregation phase and the quality of the models they receive during the model allocation phase. Additionally, CCS serves as a valuable metric for identifying potential free-riders among clients. CERFL assesses participants by analyzing their performance on a validation dataset held by the server and the structure of their parameters across two consecutive rounds of updates. Our experimental results demonstrate that CERFL improves Mavericks’ model accuracy, promotes fairness among participants, and effectively prevents free-riders from stealing the model.
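To make the CCS idea concrete, the sketch below illustrates one way such a score could couple free-rider filtering with contribution-weighted aggregation. This is a minimal illustration, not CERFL's actual algorithm: the abstract gives no formulas, so the blend of validation accuracy with cross-round update similarity, the mixing weight `alpha`, and the threshold `free_rider_tau` are all assumptions introduced here.

```python
# Minimal sketch of a CCS-style defense, assuming (not taken from the paper):
# CCS = alpha * validation accuracy + (1 - alpha) * cosine similarity between
# a client's updates in two consecutive rounds, with a cutoff for free-riders.
import numpy as np

def flatten(update):
    """Concatenate a client's per-layer parameter update into one vector."""
    return np.concatenate([w.ravel() for w in update])

def contribution_scores(val_accs, prev_updates, curr_updates, alpha=0.5):
    """Blend validation performance with update-structure consistency.

    val_accs: per-client accuracy on the server-held validation set.
    prev_updates / curr_updates: per-client updates from consecutive rounds.
    alpha: assumed mixing weight between the two signals.
    """
    scores = []
    for acc, prev, curr in zip(val_accs, prev_updates, curr_updates):
        p, c = flatten(prev), flatten(curr)
        denom = np.linalg.norm(p) * np.linalg.norm(c)
        # Honest training tends to drift smoothly between rounds, while
        # fabricated updates (e.g., random noise) show degenerate similarity.
        cos = float(p @ c / denom) if denom > 0 else 0.0
        scores.append(alpha * acc + (1 - alpha) * max(cos, 0.0))
    return np.asarray(scores)

def aggregate(updates, scores, free_rider_tau=0.1):
    """Score-weighted aggregation; suspected free-riders get zero weight."""
    weights = np.where(scores < free_rider_tau, 0.0, scores)
    total = weights.sum()
    if total == 0:
        raise ValueError("all clients flagged as free-riders")
    weights = weights / total
    flat = np.stack([flatten(u) for u in updates])
    return weights @ flat  # weighted average of flattened parameter vectors

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shapes = [(4, 4), (4,)]
    prev = [[rng.normal(size=s) for s in shapes] for _ in range(3)]
    # Two honest clients drift slightly; the third replays pure noise.
    curr = [[w + 0.1 * rng.normal(size=w.shape) for w in u] for u in prev[:2]]
    curr.append([rng.normal(size=s) for s in shapes])
    accs = [0.80, 0.70, 0.05]  # the free-rider also scores low on validation
    s = contribution_scores(accs, prev, curr)
    print("scores:", s)
    print("aggregate shape:", aggregate(curr, s).shape)
```

In this sketch the same scores could also drive the allocation phase described above, e.g., by returning a degraded model to clients whose CCS falls below the free-rider cutoff.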