FCFL: A Fairness Compensation-Based Federated Learning Scheme with Accumulated Queues

Published: 2024 · Last Modified: 25 Jan 2026 · ECML/PKDD (3) 2024 · CC BY-SA 4.0
Abstract: The surge of ubiquitous data underscores the need for Federated Learning (FL), which allows distributed data holders to collaboratively train a global model without revealing their private local data, preserving user privacy and security. However, the performance of the trained global model on individual clients is impaired by the heterogeneity of clients' local data, which manifests as performance unfairness in FL. This unfairness issue has attracted the research community's attention, and several recent works pursue fair solutions by reweighting clients during aggregation, but they overlook the impact of client selection on aggregation. To fill this gap, this paper proposes a Fairness Compensation-based FL scheme (FCFL) to alleviate unfairness among clients. In particular, the unfairness of each client during FL training is estimated as the accuracy difference between its local performance and the global performance, and accumulated queues track each client's cumulative unfairness across rounds. In addition, a fairness compensation FL method is devised, which dynamically selects participating clients and adaptively adjusts the aggregation weights in each round to guarantee fairness throughout training. The proposed FCFL scheme is a flexible framework with tunable parameters, and the FedAvg algorithm is recovered as a special case when \(\alpha =0\). Finally, extensive experiments on three benchmark datasets under different settings demonstrate that FCFL outperforms state-of-the-art baselines, improving the fairness metric by up to 30.4% while maintaining competitive accuracy. The source code is available at https://github.com/wlffffff/FCFL.
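The abstract's mechanism — accumulating per-client unfairness in queues and compensating via aggregation weights — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the clamped (Lyapunov-style) queue update and the specific weight formula `base * (1 + alpha * queue)` are assumptions chosen only so that \(\alpha = 0\) reduces to FedAvg's data-size weighting, as the abstract states.

```python
def update_queues(queues, unfairness):
    """Accumulate per-client unfairness across rounds.

    `unfairness[i]` is the accuracy gap between client i's local and
    global performance this round. The max(..., 0) clamp is a
    Lyapunov-style virtual-queue assumption; the paper only says
    "accumulated queues".
    """
    return [max(q + u, 0.0) for q, u in zip(queues, unfairness)]


def aggregation_weights(queues, n_samples, alpha):
    """Blend FedAvg data-size weighting with a fairness compensation
    term proportional to each client's accumulated queue.

    With alpha = 0 this is exactly FedAvg's weighting, matching the
    abstract's claim that FedAvg is the special case.
    """
    total_samples = sum(n_samples)
    base = [n / total_samples for n in n_samples]          # FedAvg weights
    scores = [b * (1.0 + alpha * q) for b, q in zip(base, queues)]
    norm = sum(scores)
    return [s / norm for s in scores]                      # renormalize
```

For example, a client that has been persistently underserved (large queue) receives a larger aggregation weight when \(\alpha > 0\), which is the "compensation" in FCFL's name.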