Towards Efficient Replay in Federated Incremental Learning

Published: 29 Feb 2024, Last Modified: 24 May 2024. The Thirty-Fifth IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024). CC BY 4.0.
Abstract: In Federated Learning (FL), the data on each client is typically assumed to be fixed or static. However, in real-world applications data often arrives incrementally, and the data domain may grow dynamically. In this work, we study catastrophic forgetting with data heterogeneity in Federated Incremental Learning (FIL) scenarios, where edge clients may lack enough storage space to retain the full data. We propose a simple, generic framework for FIL named Re-Fed, which coordinates each client to cache important samples for replay. More specifically, when a new task arrives, each client first caches selected previous samples based on their global and local importance. Then, the client trains the local model with both the cached samples and the samples from the new task. Theoretically, we analyze the ability of Re-Fed to discover important samples for replay, thus alleviating the catastrophic forgetting problem. Moreover, we empirically show that Re-Fed achieves competitive performance compared with state-of-the-art methods.
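The abstract describes the Re-Fed workflow only at a high level. As a rough illustration of the caching-and-replay step, the sketch below shows one way a client-side replay cache of this shape could look in Python. The `ReplayClient` class, the importance score (a weighted mix of hypothetical global and local signals controlled by `alpha`), and the `cache_size` budget are assumptions made for illustration; they are not the paper's actual scoring mechanism.

```python
# Minimal sketch of a client-side replay cache for federated incremental
# learning, in the spirit of the Re-Fed description above. The importance
# score (a convex combination of assumed "global" and "local" signals)
# is a placeholder, not the paper's method.
from dataclasses import dataclass, field
from typing import Any, List, Tuple
import heapq

@dataclass(order=True)
class ScoredSample:
    importance: float
    sample: Any = field(compare=False)  # payload excluded from ordering

class ReplayClient:
    def __init__(self, cache_size: int, alpha: float = 0.5):
        self.cache_size = cache_size  # storage budget on the edge client
        self.alpha = alpha            # assumed global-vs-local trade-off
        self.cache: List[ScoredSample] = []  # min-heap keyed by importance

    def score(self, global_score: float, local_score: float) -> float:
        # Placeholder importance: weighted mix of the two signals.
        return self.alpha * global_score + (1 - self.alpha) * local_score

    def cache_samples(self, old_task: List[Tuple[Any, float, float]]) -> None:
        # Keep only the `cache_size` most important previous samples.
        for sample, g, l in old_task:
            item = ScoredSample(self.score(g, l), sample)
            if len(self.cache) < self.cache_size:
                heapq.heappush(self.cache, item)
            else:
                heapq.heappushpop(self.cache, item)  # evict least important

    def training_set(self, new_task: List[Any]) -> List[Any]:
        # Local training uses the cached old samples plus the new task's data.
        return [s.sample for s in self.cache] + list(new_task)
```

Under these assumptions, when a new task arrives a client would first call `cache_samples` on the previous task's data (with whatever importance signals the method provides) and then train its local model on `training_set(new_task)`, so that replayed samples counteract forgetting while respecting the storage budget.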