Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning

Published: 19 Jun 2023, Last Modified: 21 Jul 2023, FL-ICML 2023
Keywords: Federated Continual Learning, Federated learning, Rehearsal-free Continual Learning, Prompt Learning
TL;DR: This work introduces Fed-CPrompt, an asynchronous-task federated continual learning framework that utilizes prompt learning to mitigate forgetting while preserving model adaptability for new tasks in a communication-efficient manner.
Abstract: Federated continual learning (FCL) learns incremental tasks over time from confidential datasets distributed across clients. This paper focuses on rehearsal-free FCL, which suffers from severe forgetting when learning new tasks because historical task data are inaccessible. To address this issue, we propose Fed-CPrompt, which builds on prompt learning techniques to obtain task-specific prompts in a communication-efficient way. Fed-CPrompt introduces two key components, asynchronous prompt learning and contrastive continual loss, to handle asynchronous task arrival and heterogeneous data distributions in FCL, respectively. Extensive experiments demonstrate the effectiveness of Fed-CPrompt in achieving SOTA rehearsal-free FCL performance.
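The paper's exact loss is not given on this page, so the following is only a minimal, illustrative sketch of one common InfoNCE-style way to realize a "contrastive continual loss" over task-specific prompts: features produced under the current task's prompt are pulled toward that prompt's embedding and pushed away from frozen prompts of earlier tasks. All names and shapes (`contrastive_continual_loss`, `temperature`, pooled prompt embeddings) are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only -- NOT the authors' code. Assumes each task has a
# pooled prompt embedding and that prompts from completed tasks are frozen.
import torch
import torch.nn.functional as F


def contrastive_continual_loss(cur_feat, cur_prompt, old_prompts, temperature=0.1):
    """InfoNCE-style loss over task prompts.

    cur_feat:    (B, D) features computed with the current task's prompt.
    cur_prompt:  (D,)   pooled embedding of the current task's prompt (positive).
    old_prompts: (T, D) pooled embeddings of frozen past-task prompts (negatives).
    """
    cur_feat = F.normalize(cur_feat, dim=-1)
    # Anchor 0 is the current prompt; anchors 1..T are frozen past prompts.
    anchors = F.normalize(torch.cat([cur_prompt[None, :], old_prompts], dim=0), dim=-1)
    logits = cur_feat @ anchors.T / temperature           # (B, 1 + T)
    labels = torch.zeros(cur_feat.size(0), dtype=torch.long)  # positive index = 0
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    B, D, T = 8, 128, 3  # toy batch size, feature dim, number of past tasks
    feat = torch.randn(B, D, requires_grad=True)
    loss = contrastive_continual_loss(feat, torch.randn(D), torch.randn(T, D))
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```

Under this reading, only the small prompt parameters (not the frozen backbone) would be trained and exchanged between clients and server, which is what makes the approach communication-efficient.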
Submission Number: 72