Spatio-Temporal Gradient Matching for Federated Continual Learning

Published: 10 Jun 2025, Last Modified: 29 Jun 2025 · CFAgentic @ ICML'25 Poster · CC BY 4.0
Keywords: Federated Learning, Continual Learning, Gradient Matching
TL;DR: STAMP unifies spatial (cross-client) and temporal (cross-task) gradient matching with a rehearsal pool to mitigate heterogeneity and catastrophic forgetting in federated continual learning.
Abstract: Federated Continual Learning (FCL) has emerged as an important research area, as data from distributed clients often arrives in a streaming manner and requires sequential learning. In this paper, we consider a more practical and challenging FCL setting where clients may have unrelated or even conflicting tasks. In such scenarios, statistical heterogeneity and data noise can lead to spurious correlations, biased feature learning, and severe catastrophic forgetting. Existing FCL methods often rely on generative replay to reconstruct previous tasks, but these approaches themselves suffer from task divergence and forgetting, resulting in overfitting and degraded performance. To address these challenges, we propose a novel approach called Spatio-Temporal grAdient Matching with rehearsal dataPool (STAMP). Our key idea is to perform unified gradient matching across both the spatial and temporal dimensions of FCL: spatial matching aligns gradients across clients at the same time step, while temporal matching aligns gradients across sequential tasks within each client. This dual perspective mitigates negative transfer and improves knowledge retention across diverse and evolving tasks. Extensive experiments show that STAMP outperforms existing FCL methods under heterogeneous conditions.
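To make the dual matching idea concrete, below is a minimal, hypothetical sketch (not the authors' released STAMP code): each client aligns its current-task gradient against (i) a temporal reference gradient computed on a local rehearsal pool of earlier tasks and (ii) a spatial reference gradient aggregated across clients at the same round. Names such as `rehearsal_batch` and `spatial_ref_grad` are illustrative assumptions, and the alignment shown uses a simple projection of conflicting gradient components rather than the paper's specific matching objective.

```python
# Hypothetical illustration of spatio-temporal gradient alignment on one client.
# This is NOT the paper's implementation; signatures and names are assumptions.
import torch


def flat_grads(model):
    """Concatenate the .grad tensors of all trainable parameters into one vector."""
    return torch.cat([p.grad.reshape(-1) for p in model.parameters()
                      if p.requires_grad and p.grad is not None])


def write_back(model, flat):
    """Copy a flat gradient vector back into the parameters' .grad fields."""
    offset = 0
    for p in model.parameters():
        if p.requires_grad and p.grad is not None:
            n = p.grad.numel()
            p.grad.copy_(flat[offset:offset + n].view_as(p.grad))
            offset += n


def project_if_conflicting(g, g_ref, eps=1e-12):
    """If g conflicts with g_ref (negative dot product), remove the conflicting component."""
    dot = torch.dot(g, g_ref)
    if dot < 0:
        g = g - (dot / (g_ref.norm() ** 2 + eps)) * g_ref
    return g


def local_step(model, loss_fn, batch, rehearsal_batch, spatial_ref_grad, optimizer):
    """One client update: compute the current-task gradient, then align it
    temporally (local rehearsal pool) and spatially (cross-client reference)."""
    # Temporal reference: gradient on a batch replayed from earlier local tasks.
    model.zero_grad()
    loss_fn(model, rehearsal_batch).backward()
    g_temporal = flat_grads(model).detach().clone()

    # Current-task gradient.
    model.zero_grad()
    loss_fn(model, batch).backward()
    g = flat_grads(model)

    # Align with both references before applying the update.
    g = project_if_conflicting(g, g_temporal)
    if spatial_ref_grad is not None:  # e.g. a server-broadcast average gradient
        g = project_if_conflicting(g, spatial_ref_grad.to(g.device))

    write_back(model, g)
    optimizer.step()
```

The projection step is one common way to realize gradient alignment; a matching objective could instead penalize, e.g., the cosine distance between the current and reference gradients, which is a design choice left to the full paper.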
Submission Number: 24