TL;DR: A client-centric generative replay approach for federated continual learning.
Abstract: Generative replay (GR) has been extensively validated in continual learning as a mechanism to synthesize data and replay past knowledge to mitigate forgetting.
By leveraging synthetic rather than real data for replay, GR has been adopted by several federated continual learning (FCL) approaches to preserve the privacy of client-side data.
While existing GR-based FCL approaches have introduced various improvements, none of them specifically accounts for the unique characteristics of federated learning settings.
Beyond privacy constraints, what other fundamental aspects of federated learning should be explored in the context of FCL?
In this work, we explore the potential benefits of emphasizing the role of clients throughout the learning process.
We begin by highlighting two key observations: (a) Client Expertise Superiority, where clients, rather than the server, act as domain experts, and (b) Client Forgetting Variance, where heterogeneous data distributions across clients lead to varying levels of forgetting.
Building on these insights, we propose CAN (Clients As Navigators), highlighting the pivotal role of clients in both data synthesis and data replay.
Extensive evaluations demonstrate that this client-centric approach achieves state-of-the-art performance. Notably, it requires a smaller buffer size, reducing storage overhead and enhancing computational efficiency.
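To make the setup concrete, here is a minimal sketch of one way a client-centric generative-replay round could be organized: each client keeps a small local generator, replays only the classes it has actually seen (reflecting that forgetting varies across clients), and the server merely averages weights without ever touching raw or synthetic data. All names (LocalGenerator, client_update, fedavg), architectures, and hyperparameters are illustrative assumptions, not the CAN implementation described in the paper.

```python
# Illustrative sketch of a client-centric generative-replay round (NOT the paper's CAN code).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalGenerator(nn.Module):
    """Tiny conditional generator a client could train on its own local data."""
    def __init__(self, latent_dim=16, num_classes=10, out_dim=784):
        super().__init__()
        self.latent_dim, self.num_classes = latent_dim, num_classes
        self.net = nn.Sequential(
            nn.Linear(latent_dim + num_classes, 128), nn.ReLU(),
            nn.Linear(128, out_dim))

    def sample(self, labels):
        # Draw noise, condition on class labels, and synthesize replay inputs.
        z = torch.randn(len(labels), self.latent_dim)
        y = F.one_hot(labels, self.num_classes).float()
        return self.net(torch.cat([z, y], dim=1))

def client_update(global_model, generator, task_loader, seen_classes,
                  replay_per_class=8, lr=1e-2):
    """Client-side step: mix current-task data with self-generated replay."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in task_loader:
        loss = F.cross_entropy(model(x), y)
        if seen_classes:
            # Replay only the classes THIS client has seen: clients navigate
            # their own replay because their forgetting differs.
            labels = torch.tensor(seen_classes).repeat_interleave(replay_per_class)
            x_rep = generator.sample(labels).detach()
            loss = loss + F.cross_entropy(model(x_rep), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.state_dict()

def fedavg(states):
    """Server only averages weights; it never sees raw or synthetic client data."""
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(dim=0)
    return avg
```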
Lay Summary: Many real-world AI applications, like personalized assistants or healthcare devices, collect data from different users over time. One key problem is that AI models can forget earlier knowledge when learning from new data, much like a person forgetting old lessons while cramming for a new test. Our research proposes a new way to tackle this issue by making better use of the devices (or “clients”) that hold the data. Instead of relying only on a central server, our method, CAN (Clients As Navigators), treats each client as a local expert. These clients help generate training examples and decide which past knowledge should be reviewed, based on what they are likely to forget. This results in better learning for each client while keeping their data private. Our method helps AI systems remember better, even when the data differs widely across users. It also reduces memory and computational costs, making it more practical for everyday use.
Primary Area: Social Aspects->Privacy
Keywords: Federated Learning, Continual Learning
Submission Number: 8667