Prioritized Federated Learning: Leveraging Non-Priority Clients for Targeted Model Improvement

TMLR Paper 2615 Authors

02 May 2024 (modified: 09 Jul 2024) · Under review for TMLR · CC BY-SA 4.0
Abstract: Federated Learning (FL) is a distributed machine learning approach for learning models on decentralized, heterogeneous data without requiring clients to share their data. Many existing FL approaches assume that all clients are equally important and construct a global objective over all clients. We consider a version of FL we call Prioritized FL, where the goal is to optimize a weighted mean objective over a designated subset of clients, the priority clients. An important question arises: how do we choose well-aligned non-priority clients to participate in the federation, while discarding misaligned ones? We present FedALIGN (Federated Adaptive Learning with Inclusion of Global Needs) to address this challenge. The algorithm employs a matching strategy that selects non-priority clients based on how similar the model's loss on their data is to its loss on the global data, thereby using non-priority client gradients only when they benefit the priority clients. This design yields mutual benefits: non-priority clients are motivated to join when the model performs satisfactorily on their data, and priority clients can exploit their updates and computational resources whenever the goals align. We present a convergence analysis that quantifies the trade-off between client selection and speed of convergence. Our algorithm achieves faster convergence and higher test accuracy than baselines on a variety of synthetic and benchmark datasets.
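The matching strategy described in the abstract can be illustrated with a minimal sketch. This is a hypothetical rendering, not the authors' implementation: the function name, the dictionary-based interface, and the absolute-difference threshold (`tolerance`) are all assumptions; the paper's actual selection criterion and its schedule may differ.

```python
# Hypothetical sketch of a FedALIGN-style selection rule: keep a
# non-priority client only if the current model's loss on its local data
# is close to the model's loss on the priority (global) data.
# `tolerance` is an assumed hyperparameter controlling alignment strictness.

def select_nonpriority_clients(global_loss, client_losses, tolerance):
    """Return ids of non-priority clients whose local loss is within
    `tolerance` of the loss on the priority clients' data."""
    selected = []
    for client_id, local_loss in client_losses.items():
        # A client counts as "well-aligned" when the model fits its data
        # about as well as it fits the priority data.
        if abs(local_loss - global_loss) <= tolerance:
            selected.append(client_id)
    return selected

# Example round: priority-data loss 0.50; clients 1 and 3 are aligned.
aligned = select_nonpriority_clients(
    0.50, {1: 0.55, 2: 0.95, 3: 0.45, 4: 1.20}, tolerance=0.1)
print(aligned)  # → [1, 3]
```

Only the gradients of the selected clients would then be aggregated alongside the priority clients' updates in that round; misaligned clients (2 and 4 above) are skipped.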
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: Highlighted all changes in blue.
Assigned Action Editor: ~Philip_K._Chan1
Submission Number: 2615