FedDEK: Federated Domain-Incremental Learning via Expert Knowledge Construction

Lu Liu, Juan Li, Tianzi Zang

Published: 01 Jan 2026, Last Modified: 16 Jan 2026. License: CC BY-SA 4.0
Abstract: The rapid expansion of Internet of Things (IoT) applications has produced vast amounts of data distributed across edge devices. Federated Learning (FL) enables collaborative model training without raw data exchange, making it well suited for privacy-sensitive IoT environments. However, in domain-incremental scenarios with evolving data distributions, such as lighting shifts in traffic monitoring or seasonal changes in agriculture, FL models suffer from catastrophic forgetting. To address this, we propose FedDEK, a domain-incremental FL framework built on a pre-trained Vision Transformer (ViT). FedDEK freezes the ViT backbone and incrementally incorporates domain-specific expert modules trained on clients. A global knowledge extractor selects and aggregates relevant expert outputs during inference. We further introduce lightweight adaptation to reduce client communication cost and apply expert knowledge transfer to accelerate convergence and enhance cross-domain generalization. Extensive experiments on two datasets under different settings demonstrate that FedDEK improves average accuracy, reduces forgetting, and lowers communication overhead across diverse task sequences, outperforming prior federated continual learning methods.
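To make the frozen-backbone-plus-experts idea in the abstract concrete, here is a minimal, hypothetical sketch (not the authors' implementation; all function names, the scaling-based "experts," and the similarity-weighted aggregation rule are illustrative assumptions): a frozen backbone produces a feature vector, per-domain expert modules transform it, and a knowledge extractor weights each expert's output by the similarity between the feature and that expert's domain prototype.

```python
import math

def frozen_backbone(x):
    # Stand-in for a frozen pre-trained ViT: a fixed, untrained mapping.
    return [xi * 0.5 + 1.0 for xi in x]

def make_expert(scale):
    # Each domain expert is a small trainable head; here just a scaling
    # for illustration.
    return lambda f: [scale * fi for fi in f]

def cosine(a, b):
    dot = sum(ai * bi for ai, bi in zip(a, b))
    na = math.sqrt(sum(ai * ai for ai in a))
    nb = math.sqrt(sum(bi * bi for bi in b))
    return dot / (na * nb)

def knowledge_extractor(feature, experts, prototypes):
    # Softmax over feature/prototype similarities gives expert weights,
    # then expert outputs are aggregated as a weighted sum.
    sims = [cosine(feature, p) for p in prototypes]
    m = max(sims)
    exps = [math.exp(s - m) for s in sims]
    z = sum(exps)
    weights = [e / z for e in exps]
    outputs = [expert(feature) for expert in experts]
    dim = len(feature)
    return [sum(w * out[i] for w, out in zip(weights, outputs))
            for i in range(dim)]

# Two hypothetical domain experts with distinct prototypes.
experts = [make_expert(1.0), make_expert(2.0)]
prototypes = [[1.0, 0.0], [0.0, 1.0]]
feat = frozen_backbone([1.0, -0.5])
agg = knowledge_extractor(feat, experts, prototypes)
```

Because the aggregated output is a convex combination of the expert outputs, it always lies between the individual experts' predictions, with the weighting driven by domain similarity rather than a hard expert choice.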