HePCo: Data-Free Heterogeneous Prompt Consolidation for Continual Federated Learning

Published: 28 Oct 2023, Last Modified: 02 Apr 2024, DistShift 2023 Poster
Keywords: continual learning, federated learning, prompt tuning, foundation models, robustness
TL;DR: We propose a prompt tuning and aggregation scheme leveraging foundation models and a lightweight data-free distillation mechanism to tackle forgetting and heterogeneity in continual federated learning
Abstract: In this paper, we focus on the important yet understudied problem of Continual Federated Learning (CFL), where a server communicates with a set of clients to incrementally learn new concepts over time without sharing or storing any data. The complexity of this problem is compounded by challenges from both the Continual and Federated Learning perspectives. Specifically, models trained in a CFL setup suffer from catastrophic forgetting, which is exacerbated by data heterogeneity across clients. Existing attempts at this problem tend to impose large overheads on clients and communication channels, or require access to stored data, which renders them unsuitable for real-world use due to privacy concerns. We study this problem in the context of Foundation Models and showcase their effectiveness in mitigating forgetting while minimizing overhead costs and without requiring access to any stored data. We achieve this by leveraging a prompting-based approach and proposing a novel, lightweight generation and distillation scheme to aggregate client models at the server. Our approach outperforms both existing methods and our own baselines by more than 7% on challenging image-classification benchmarks while significantly reducing communication and client-level computation costs.
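To make the server-side aggregation idea concrete, the following is a minimal, hedged sketch of data-free prompt consolidation: client prompts are distilled into a single server prompt using only synthetically generated features, so no client data is touched. The function names, tensor shapes, Gaussian pseudo-feature generator, and shared classification head are illustrative assumptions for this sketch, not the paper's actual HePCo procedure.

```python
# Hedged sketch of server-side, data-free prompt consolidation.
# Shapes, names, and the Gaussian pseudo-feature generator are
# illustrative assumptions, not the authors' exact method.
import torch
import torch.nn.functional as F

def classify(prompt, features, head):
    """Toy classifier: condition features on a pooled prompt, then apply a linear head."""
    return head(features + prompt.mean(dim=0))

def consolidate_prompts(client_prompts, head, feat_dim=768, n_classes=100,
                        steps=200, batch=64, lr=1e-2):
    """Distill an ensemble of client prompts into one server prompt
    using only synthetic (data-free) features."""
    server_prompt = torch.stack(client_prompts).mean(dim=0).clone().requires_grad_(True)
    opt = torch.optim.Adam([server_prompt], lr=lr)
    for _ in range(steps):
        # Data-free: sample pseudo-features instead of accessing client data.
        z = torch.randn(batch, feat_dim)
        with torch.no_grad():
            # Teacher = average of client predictive distributions (in log space).
            teacher = torch.stack(
                [F.log_softmax(classify(p, z, head), dim=-1) for p in client_prompts]
            ).logsumexp(dim=0) - torch.log(torch.tensor(float(len(client_prompts))))
        student = F.log_softmax(classify(server_prompt, z, head), dim=-1)
        loss = F.kl_div(student, teacher, log_target=True, reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()
    return server_prompt.detach()

# Example: 3 clients, prompts of shape (prompt_len=5, feat_dim=768).
head = torch.nn.Linear(768, 100)
clients = [torch.randn(5, 768) for _ in range(3)]
merged = consolidate_prompts(clients, head)
```

Initializing the server prompt from the average of client prompts and then refining it against the client ensemble is one plausible way such a lightweight, data-free distillation step could keep communication and computation costs low.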
Submission Number: 60