FedTAP: Federated Multi-Task Continual Learning via Dynamic Task-Aware Prototypes

19 Sept 2025 (modified: 14 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Federated Learning, Multi-Task Learning, Continual Learning
TL;DR: We propose FedTAP, a unified framework for Federated Multi-Task Continual Learning that uses dynamic task-aware prototypes.
Abstract: Real-world federated learning systems often involve clients performing different tasks under continually changing conditions, including dynamic participation, where new tasks emerge while others fade away. This dynamic environment, however, lies beyond the scope of conventional approaches: Federated Multi-Task Learning addresses different tasks but assumes they are static, while Federated Continual Learning only considers temporal data shifts within a single task. To address this gap, we introduce Federated Multi-Task Continual Learning (FMTCL), a novel scenario that simultaneously handles task heterogeneity, temporal data shifts, and dynamic task composition. We propose FedTAP (Federated Task-Aware Prototypes), a prototype-based framework designed to solve the challenges of FMTCL. It consists of: (i) Prototype-Guided Aggregation (PGA), which aggregates client updates in a shared prototype space; (ii) Task-Aware Prototype Learning (TPL), which trains a diverse and sparsely utilized set of prototypes; and (iii) Adaptive Prototype Allocation (APA), which manages the prototype pool to adapt to dynamic task participation. FedTAP achieves state-of-the-art performance on multi-task benchmarks, demonstrating strong effectiveness in FMTCL.
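To make the idea of aggregating in a shared prototype space concrete, here is a minimal sketch of what prototype-guided aggregation could look like: each client summarizes its data per task as a mean feature embedding (a prototype), and the server averages matching prototypes across clients, weighted by sample counts. All function names, the mean-embedding prototype construction, and the weighting scheme are illustrative assumptions; the abstract does not specify FedTAP's actual PGA mechanics.

```python
import numpy as np

def client_prototypes(embeddings, labels):
    """Per-task prototypes as mean feature embeddings.
    (Assumption: the abstract does not define how prototypes are built.)"""
    return {int(t): embeddings[labels == t].mean(axis=0)
            for t in np.unique(labels)}

def prototype_guided_aggregation(client_protos, client_sizes):
    """Server-side aggregation in the shared prototype space:
    a sample-weighted average of each task's prototypes over the
    clients that hold that task (illustrative weighting choice)."""
    tasks = {t for protos in client_protos for t in protos}
    global_protos = {}
    for t in tasks:
        vecs, weights = [], []
        for protos, n in zip(client_protos, client_sizes):
            if t in protos:  # clients may participate in only some tasks
                vecs.append(protos[t])
                weights.append(float(n))
        global_protos[t] = np.average(np.stack(vecs), axis=0, weights=weights)
    return global_protos
```

Note that a client lacking a given task simply contributes nothing to that task's global prototype, which is what lets the scheme tolerate dynamic task participation.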
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 17737