Abstract: This study presents Task-level Heterogeneous Federated Learning (TH-FL), a novel paradigm that fuses the principles of Federated Learning (FL) and Multi-Task Learning (MTL). In the TH-FL scenario, each client can learn an arbitrary number of tasks, which may vary in type and originate from distinct domains. We introduce a unique baseline model, BARTENDER, that integrates a Conditional Prompt (CP) module. This module encodes task-specific and domain-specific information, enabling the model to generate tailored outputs conditioned on the encoded inputs. This strategy not only minimizes the communication costs associated with FL but also enhances model generalization across a variety of task types. Through extensive experiments, we establish that BARTENDER surpasses traditional multi-decoder architectures across diverse scenarios. We also explore the influence of the parameter decoupling strategy on model training and outline the assumptions necessary for achieving an $O(1/\sqrt{T})$ convergence rate in the TH-FL scenario.
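The abstract does not specify the exact form of the Conditional Prompt (CP) module, so the following is only an illustrative sketch under assumptions: prompt-style conditioning where learnable vectors, selected by a task id and a domain id, are prepended to a client's input embeddings so a single shared backbone can produce task- and domain-tailored outputs. All names (`ConditionalPrompt`, `task_prompts`, `domain_prompts`) and shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

class ConditionalPrompt:
    """Hypothetical sketch of a CP module: per-task and per-domain prompt
    vectors are looked up and prepended to the token embeddings, encoding
    task-specific and domain-specific information for a shared backbone."""

    def __init__(self, n_tasks, n_domains, prompt_len, d_model):
        # Small random initialization stands in for learnable parameters.
        self.task_prompts = rng.standard_normal((n_tasks, prompt_len, d_model)) * 0.02
        self.domain_prompts = rng.standard_normal((n_domains, prompt_len, d_model)) * 0.02

    def __call__(self, x, task_id, domain_id):
        # x: (seq_len, d_model) token embeddings for one example.
        prompts = np.concatenate(
            [self.task_prompts[task_id], self.domain_prompts[domain_id]], axis=0
        )
        # Prepend the selected prompts; the backbone then processes the
        # conditioned sequence of length 2 * prompt_len + seq_len.
        return np.concatenate([prompts, x], axis=0)

cp = ConditionalPrompt(n_tasks=3, n_domains=2, prompt_len=4, d_model=8)
x = rng.standard_normal((10, 8))
out = cp(x, task_id=1, domain_id=0)
print(out.shape)  # (18, 8)
```

Because only the small prompt tables vary per task and domain while the backbone is shared, a scheme like this is consistent with the abstract's claim of reduced FL communication cost: clients need only exchange the compact prompt parameters rather than full per-task decoders.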