FedUP: Bridging Fairness and Efficiency in Cross-Silo Federated Learning

Published: 01 Jan 2024 · Last Modified: 28 Jan 2025 · IEEE Trans. Serv. Comput. 2024 · CC BY-SA 4.0
Abstract: Although federated learning (FL) enables collaborative training across multiple data silos in a privacy-protected manner, naively minimizing the aggregated loss to facilitate an efficient federation may compromise its fairness. Many efforts have been devoted to maintaining similar average accuracy across clients by reweighting the loss function, while clients' potential contributions are largely ignored. This, however, is often detrimental, since treating all clients equally harms the interests of those clients with greater contributions. To tackle this issue, we introduce utopian fairness to characterize the relationship between individual earning and collaborative productivity, and propose Federated-UtoPia (FedUP), a novel FL framework that balances both efficient collaboration and fair aggregation. For the distributed collaboration, we model the training process among strategic clients as a supermodular game, which facilitates a rational incentive design through the optimal reward. For the model aggregation, we design a weight attention mechanism that computes fair aggregation weights by minimizing the performance bias among heterogeneous clients. In particular, we utilize alternating optimization theory to bridge the gap between collaboration efficiency and utopian fairness, and theoretically prove that FedUP achieves fair model performance with a fast training convergence rate. Extensive experiments on both synthetic and real datasets demonstrate the superiority of FedUP.
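The abstract does not give the exact form of the weight attention mechanism; as a rough intuition only, the idea of computing aggregation weights that shrink performance bias across heterogeneous clients can be sketched with a simple loss-based softmax, where worse-performing clients receive larger weight in the global average. The function names, the softmax choice, and the `temperature` parameter below are illustrative assumptions, not the paper's actual mechanism.

```python
import numpy as np

def fair_aggregation_weights(client_losses, temperature=1.0):
    """Softmax over per-client validation losses (illustrative stand-in
    for a learned weight attention): clients with higher loss get larger
    aggregation weight, pulling their performance toward the rest."""
    losses = np.asarray(client_losses, dtype=float)
    scaled = losses / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    w = np.exp(scaled)
    return w / w.sum()

def aggregate(client_params, weights):
    """Weighted average of client parameter vectors (FedAvg-style)."""
    return weights @ np.stack(client_params)

# Hypothetical example: three clients with different validation losses.
losses = [0.2, 0.5, 0.9]
w = fair_aggregation_weights(losses, temperature=0.5)
params = [np.array([1.0, 2.0]), np.array([2.0, 3.0]), np.array([3.0, 4.0])]
global_params = aggregate(params, w)
```

In this toy setup the client with loss 0.9 receives the largest weight, so the aggregated model moves further toward the underperforming client; the actual FedUP mechanism additionally balances this against contribution-aware incentives from the supermodular game.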