Abstract: The proliferation of end devices has given rise to a distributed computing paradigm in which on-device machine learning models continuously process the diverse data these devices generate. The dynamic nature of this data, characterized by continuous change or data drift, poses significant challenges for on-device models. Continual learning (CL) has been proposed to address this issue, enabling machine learning models to incrementally update their knowledge while mitigating catastrophic forgetting. However, the traditional centralized approach to CL is unsuitable for end devices due to privacy and data-volume concerns. In this context, federated CL (FCL) emerges as a promising solution, preserving user data locally while enhancing models through collaborative updates. To address the challenges of limited storage resources for CL, poor autonomy in task-shift detection, and difficulty in coping with new adversarial tasks in the FCL scenario, we propose a novel FCL framework named self-adaptive federated CL (SacFL). SacFL employs an encoder–decoder architecture to separate task-robust and task-sensitive components, significantly reducing storage demands by retaining only lightweight task-sensitive components on resource-constrained end devices. Moreover, SacFL leverages contrastive learning to introduce an autonomous data-shift detection mechanism that discerns whether a new task has emerged and whether it is benign. This capability allows a device to autonomously trigger CL or an attack-defense strategy without additional information, which is more practical for end devices. Comprehensive experiments on multiple text and image datasets, including CIFAR-100 and THUCNews, validate the effectiveness of SacFL in both class-incremental and domain-incremental scenarios. Furthermore, a demo system has been developed to verify its practicality.
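To make the abstract's two mechanisms concrete, the following minimal PyTorch sketch (not the authors' released code) illustrates a shared, task-robust encoder whose embeddings feed lightweight per-task decoder heads, plus a contrastive-style drift score that compares incoming embeddings against a stored prototype of the current task. All names (`Encoder`, `Decoder`, `Client`, `drift_score`), the layer sizes, and the 0.5 threshold are illustrative assumptions, not the paper's specification.

```python
# Illustrative sketch only: architecture sizes, names, and the drift
# threshold are assumptions, not SacFL's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Task-robust component, shared and federated across tasks."""
    def __init__(self, in_dim=784, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, emb_dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Lightweight task-sensitive head; one small copy kept per task,
    which is what keeps per-device storage low."""
    def __init__(self, emb_dim=128, num_classes=10):
        super().__init__()
        self.head = nn.Linear(emb_dim, num_classes)
    def forward(self, z):
        return self.head(z)

class Client:
    def __init__(self, drift_threshold=0.5):
        self.encoder = Encoder()
        self.decoders = {0: Decoder()}   # only small heads stored per task
        self.task_id = 0
        self.prototype = None            # mean embedding of current task
        self.threshold = drift_threshold

    @torch.no_grad()
    def drift_score(self, x):
        """Contrastive-style score: 1 - cosine similarity between the
        incoming batch's mean embedding and the current task prototype."""
        z = F.normalize(self.encoder(x).mean(dim=0), dim=0)
        if self.prototype is None:
            self.prototype = z
            return 0.0
        return 1.0 - torch.dot(z, self.prototype).item()

    def observe(self, x):
        """Autonomously decide whether a new task has emerged."""
        if self.drift_score(x) > self.threshold:
            # Data shift detected: spawn a new lightweight head. In the
            # full framework, a benign/adversarial check would decide
            # between triggering CL and a defense strategy.
            self.task_id += 1
            self.decoders[self.task_id] = Decoder()
            self.prototype = None
        return self.task_id
```

In this reading of the abstract, only the encoder participates in federated aggregation, while the dictionary of small decoder heads grows with tasks, so the per-task storage cost on the device stays proportional to the head size rather than the full model.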