Re-Fed+: A Better Replay Strategy for Federated Incremental Learning

Published: 01 Jan 2025, Last Modified: 17 Jul 2025 · IEEE Trans. Pattern Anal. Mach. Intell. 2025 · CC BY-SA 4.0
Abstract: Federated learning (FL) has emerged as a significant distributed machine learning paradigm. It allows a global model to be trained through client collaboration without requiring clients to share their original data. Traditional FL generally assumes that each client’s data remains fixed or static. However, in real-world scenarios, data typically arrives incrementally, leading to a dynamically expanding data domain. In this study, we examine catastrophic forgetting within Federated Incremental Learning (FIL) and focus on settings with limited training resources, where edge clients may lack the storage to keep all data or the computational budget to run complex algorithms designed for server-based environments. We propose a general and low-cost framework for FIL named Re-Fed+, which helps clients cache important samples for replay. Specifically, when a new task arrives, each client first caches selected samples from previous tasks based on their global and local significance. The client then trains the local model on both the cached samples and the new task’s samples. From a theoretical perspective, we analyze how effectively Re-Fed+ identifies significant samples for replay to alleviate catastrophic forgetting. Empirically, we show that Re-Fed+ achieves competitive performance compared to state-of-the-art methods.
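To make the client-side workflow concrete, the sketch below illustrates the replay pattern the abstract describes: score previously seen samples, keep the most significant ones in a fixed-size cache, and train the local model on the cached samples together with the new task's data. The function names (`score_significance`, `build_replay_cache`, `local_update`), the `capacity` parameter, and the random placeholder score are assumptions for illustration only; the paper's actual scoring combines global and local importance and is not reproduced here.

```python
import heapq
import random


def score_significance(sample, local_model, global_model):
    """Hypothetical placeholder for the sample-significance score.

    Re-Fed+ scores samples by their global and local importance; here we
    return a random value so that the sketch runs end to end.
    """
    return random.random()


def build_replay_cache(prev_samples, local_model, global_model, capacity):
    """Cache the `capacity` most significant samples from previous tasks."""
    scored = (
        (score_significance(s, local_model, global_model), i, s)
        for i, s in enumerate(prev_samples)
    )
    return [s for _, _, s in heapq.nlargest(capacity, scored)]


def local_update(local_model, new_task_samples, replay_cache, train_step):
    """Train the local model on cached samples plus the new task's samples."""
    for sample in replay_cache + list(new_task_samples):
        local_model = train_step(local_model, sample)
    return local_model
```

In a federated round, each client would run `build_replay_cache` once when a new task arrives and then perform its usual local training via `local_update` before sending model updates to the server; the cache size is what keeps storage and compute costs low on edge clients.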