Fed Up with Complexity: Simplifying Many-Task Federated Learning with NTKFedAvg

Published: 05 Mar 2024, Last Modified: 04 May 2024
License: CC BY 4.0
Keywords: Federated Learning, Multi-task, Many-task, Neural Tangent Kernel, NTK, Linearization, Communication Efficient, FedAvg
TL;DR: NTKFedAvg is a novel method for Many Task Federated Learning, improving privacy, efficiency, and task adaptability over FedAvg.
Abstract: Recent work has introduced the challenging setting of many-task federated learning (MaT-FL), in which each client in a federated network may solve a separate learning task. Unfortunately, existing methods for MaT-FL, such as dynamic client grouping and split FL, increase privacy risks and computational demands by maintaining separate models for each client or task on the server. We introduce a novel baseline for MaT-FL, NTKFedAvg, which maintains a single unified multi-task model on the server and uses Neural Tangent Kernel (NTK) linearization to accommodate task heterogeneity without client- or task-specific model adjustments on the server. This approach enhances privacy, reduces complexity, and improves resistance to a range of threats. Our evaluations on two MaT-FL benchmarks show that NTKFedAvg surpasses FedAvg in mIoU and accuracy, converges faster, is competitive with existing baselines, and unlearns tasks in fewer rounds. This work not only proposes a more efficient and potentially privacy-preserving baseline for MaT-FL but also contributes to the understanding of task composition and weight disentanglement in FL, offering insights into the design of FL algorithms for environments characterized by significant task diversity.
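The sketch below illustrates the general idea the abstract describes: clients train a first-order (NTK-style) linearization of a shared server model around the current server weights, and the server aggregates the results FedAvg-style. It is a minimal illustration under assumed details; the model architecture, loss, and all names (`mlp`, `linearized`, `client_step`, `fedavg_round`, `init_params`) are hypothetical and not taken from the paper.

```python
# Hedged sketch: NTK-style linearization of a shared model around the server
# weights, combined with FedAvg-style aggregation. Names and architecture are
# illustrative assumptions, not the authors' implementation.
import jax
import jax.numpy as jnp


def init_params(key, d_in=4, d_hidden=8, d_out=1):
    # Tiny two-layer network standing in for the shared multi-task model.
    k1, k2 = jax.random.split(key)
    return {"w1": jax.random.normal(k1, (d_in, d_hidden)) * 0.1,
            "b1": jnp.zeros(d_hidden),
            "w2": jax.random.normal(k2, (d_hidden, d_out)) * 0.1,
            "b2": jnp.zeros(d_out)}


def mlp(params, x):
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]


def linearized(params0, params, x):
    # First-order Taylor expansion around the server weights params0:
    # f_lin(x; params) = f(x; params0) + J(x; params0) @ (params - params0)
    delta = jax.tree_util.tree_map(lambda p, p0: p - p0, params, params0)
    f0, jvp_out = jax.jvp(lambda p: mlp(p, x), (params0,), (delta,))
    return f0 + jvp_out


def client_step(params0, params, x, y, lr=0.1):
    # One local gradient step on the linearized model for this client's task
    # (a squared-error loss is assumed here purely for illustration).
    def loss(p):
        return jnp.mean((linearized(params0, p, x) - y) ** 2)
    grads = jax.grad(loss)(params)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)


def fedavg_round(params0, client_batches, local_steps=5):
    # Each client starts from the shared server weights, trains its linearized
    # model locally, and the server averages the resulting weights.
    client_params = []
    for x, y in client_batches:
        p = params0
        for _ in range(local_steps):
            p = client_step(params0, p, x, y)
        client_params.append(p)
    return jax.tree_util.tree_map(
        lambda *ps: jnp.mean(jnp.stack(ps), axis=0), *client_params)
```

Because every client works with the same unified server model (only the linearization point is shared), no client- or task-specific models need to be stored server-side, which is the property the abstract emphasizes.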
Submission Number: 10