Federated Hierarchical Anti-Forgetting Framework for Class-Incremental Learning with Large Pre-Trained Models
Keywords: Pre-trained Model, Forgetting, Incremental Learning, Federated Learning
Abstract: Large pre-trained models, such as BERT, have demonstrated strong performance across various tasks. However, they are vulnerable to catastrophic forgetting in incremental learning, particularly in federated settings with non-IID data. Existing approaches, such as knowledge distillation and exemplar replay, partially address this issue but often incur high memory overhead, increase privacy risks, and introduce additional computational burden. To overcome these challenges, we propose FedHAF, a modular framework for federated class-incremental learning with large pre-trained models. FedHAF consists of three key components: a frozen feature extractor, a feature adjustment module, and a task-specific head. This structure enables efficient adaptation to new tasks while preserving knowledge from previous ones. We further introduce a two-stage training strategy that separates classifier learning from feature alignment. This strategy combines feature-level distillation with balance regularization, improving knowledge retention without requiring extensive parameter tuning or compromising privacy. Extensive experiments on benchmark datasets, including CIFAR-100, TinyImageNet, ImageNet, and Shakespeare, demonstrate that FedHAF consistently outperforms existing methods.
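To make the modular design described in the abstract concrete, below is a minimal, hypothetical sketch of the three-component structure (frozen feature extractor, feature adjustment module, per-task head). All module names, dimensions, and the stand-in backbone are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the structure described in the abstract:
# a frozen feature extractor, a trainable feature-adjustment module,
# and one classification head per incremental task.

class FedHAFModel(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int = 768):
        super().__init__()
        self.backbone = backbone                      # frozen pre-trained extractor
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.adjust = nn.Sequential(                  # lightweight feature adjustment
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        self.feat_dim = feat_dim
        self.heads = nn.ModuleDict()                  # task-specific heads

    def add_task(self, task_id: str, num_classes: int):
        self.heads[task_id] = nn.Linear(self.feat_dim, num_classes)

    def forward(self, x, task_id: str):
        with torch.no_grad():
            feats = self.backbone(x)                  # features from the frozen extractor
        adjusted = self.adjust(feats)                 # align features for old and new tasks
        return self.heads[task_id](adjusted)

# Usage with a stand-in backbone (a real pre-trained model would be used in practice).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 768))
model = FedHAFModel(backbone)
model.add_task("task_0", num_classes=10)
logits = model(torch.randn(4, 3, 32, 32), task_id="task_0")
print(logits.shape)  # torch.Size([4, 10])
```

In this reading, only the adjustment module and the current task's head receive gradients, which matches the abstract's claim of adapting to new tasks without extensive parameter tuning; the two-stage training (classifier learning, then feature alignment with distillation and balance regularization) would be layered on top of this skeleton.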
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 16565