Fair Concurrent Training of Multiple Models in Federated Learning

Published: 2025 · Last Modified: 22 Jan 2026 · IEEE Trans. Netw. 2025 · CC BY-SA 4.0
Abstract: Federated learning (FL) enables collaborative learning across multiple clients. In most FL work, all clients train a single learning task. However, the recent proliferation of FL applications may increasingly require multiple FL tasks to be trained simultaneously, sharing clients' computing resources, which we call Multiple-Model Federated Learning (MMFL). Current MMFL algorithms use naïve average-based client-task allocation schemes that often lead to unfair performance when FL tasks have heterogeneous difficulty levels: more difficult tasks may need more client participation to train effectively. The MMFL setting also raises a further challenge: some clients may prefer training specific tasks and may be unwilling to train others, e.g., due to high computational costs, which can exacerbate unfairness in training outcomes across tasks. We address both challenges by first designing FedFairMMFL, a difficulty-aware algorithm that dynamically allocates clients to tasks in each training round based on the tasks' current performance levels. We provide guarantees on the resulting task fairness and on FedFairMMFL's convergence rate. We then propose novel auction designs that incentivize clients to train multiple tasks, so as to fairly distribute clients' training efforts across tasks, and we extend our convergence guarantees to this setting. Finally, we evaluate our algorithm with multiple sets of learning tasks on real-world datasets, showing that it improves fairness by raising the final model accuracy and convergence speed of the worst-performing tasks, while maintaining the average accuracy across tasks.
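The abstract's core idea of difficulty-aware allocation can be illustrated with a minimal sketch. This is not the paper's exact rule (the abstract does not specify it); it assumes, purely for illustration, that each task's "difficulty" is proxied by its remaining accuracy gap, and that each client is randomly assigned to one task per round with probability proportional to that gap, so worse-performing tasks attract more clients. The function name `allocate_clients` and all parameters are hypothetical.

```python
import random

def allocate_clients(clients, task_accuracies, seed=0):
    """Illustrative difficulty-aware client-task allocation (a sketch,
    not FedFairMMFL itself).

    clients: iterable of client IDs.
    task_accuracies: dict mapping task name -> current accuracy in [0, 1].
    Returns a dict mapping each client to the task it trains this round.
    """
    rng = random.Random(seed)
    tasks = list(task_accuracies.keys())
    # Assumed difficulty proxy: how far each task is from perfect accuracy.
    # Tasks with lower current accuracy get proportionally more clients.
    weights = [1.0 - task_accuracies[t] for t in tasks]
    return {c: rng.choices(tasks, weights=weights, k=1)[0] for c in clients}
```

Under this toy rule, a task at 50% accuracy receives on average five times as many clients per round as a task at 90% accuracy, which is the qualitative behavior the abstract describes: steering participation toward the currently worst-performing tasks.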