Keywords: federated learning, fair resource allocation, alpha-fairness
Abstract: Federated learning involves jointly learning over massively distributed partitions of data generated on remote devices. Naively minimizing an aggregate loss function in such a network may disproportionately advantage or disadvantage some of the devices. In this work, we propose q-Fair Federated Learning (q-FFL), a novel optimization objective inspired by resource allocation strategies in wireless networks that encourages a fairer (i.e., more uniform) accuracy distribution across devices in federated networks. To solve q-FFL, we devise a scalable method, q-FedAvg, suited to running in federated networks. We validate both the improved fairness and flexibility of q-FFL and the efficiency of q-FedAvg through simulations on federated datasets.
TL;DR: We propose a novel optimization objective that encourages fairness in heterogeneous federated networks, and develop a scalable method to solve it.
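For context, the objective described in the abstract reweights each device's local loss by raising it to a power controlled by a fairness parameter q. A minimal sketch of that objective is given below; the notation (m devices, local empirical losses F_k, relative weights p_k) is assumed here rather than spelled out in the abstract.

```latex
% q-FFL objective sketch: m devices, local empirical losses F_k(w),
% relative weights p_k (e.g., proportional to local sample counts).
% q >= 0 tunes the fairness/accuracy trade-off; q = 0 recovers the
% standard weighted-average (FedAvg-style) objective.
\min_{w} \; f_q(w) \;=\; \sum_{k=1}^{m} \frac{p_k}{q+1}\, F_k(w)^{q+1}
```

Larger values of q place more weight on devices with higher loss, mirroring the alpha-fairness idea from wireless resource allocation referenced in the keywords.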
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/fair-resource-allocation-in-federated/code)