Adaptive Federated Continual Learning for Heterogeneous Edge Environments: A Data-Free Distillation Approach

Published: 2024 · Last Modified: 15 May 2025 · GLOBECOM 2024 · CC BY-SA 4.0
Abstract: Recently, Federated Learning (FL) has revolutionized the processing and analysis of the vast volumes of data generated by wireless devices, effectively overcoming traditional cloud computing constraints in Internet of Things (IoT) networks. In practice, however, the data on edge devices changes dynamically, which calls for continuous learning capabilities known as Federated Continual Learning (FCL). A key challenge in FCL is catastrophic forgetting, i.e., the loss of performance on previously learned data when training on new data. Common strategies retain a subset of old data to mitigate the issue, but privacy concerns limit this approach, and how to balance the emphasis on new versus old data during training remains inadequately studied. To address these challenges, we propose an Adaptive Federated Continual Learning (AdapFCL) method for heterogeneous environments that eliminates the need for episodic memory in federated settings. Specifically, the server employs a Deep Convolutional Generative Adversarial Network (DCGAN) with a data-free knowledge distillation technique, enabling it to learn representations of old data and generate synthetic data using only the global model. Clients then perform local training on new data together with the synthetic data, instead of storing old data. Furthermore, we quantify the degree of forgetting on old data for each client, allowing the emphasis weights on old and new data to be adjusted adaptively during training. Simulation results validate that the proposed method achieves superior average test accuracy while maintaining communication efficiency compared with baselines, especially in highly heterogeneous data scenarios.
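The adaptive weighting idea in the abstract — quantifying each client's forgetting on old data and shifting the training emphasis accordingly — can be sketched as follows. This is a minimal illustration, not the paper's method: the forgetting metric, the mapping from forgetting to a weight, and the `smoothing` parameter are all assumptions made for the example; the abstract does not specify the exact formulas.

```python
def forgetting_degree(acc_old_before: float, acc_old_after: float) -> float:
    """Per-client drop in test accuracy on old data after training on new
    data, clipped to [0, 1]. Hypothetical measure; the paper's exact
    forgetting metric is not given in the abstract."""
    return min(1.0, max(0.0, acc_old_before - acc_old_after))

def adaptive_loss(loss_new: float, loss_synthetic: float,
                  forgetting: float, smoothing: float = 0.5) -> float:
    """Blend the loss on new data with the loss on server-generated
    synthetic (old-task) data. More forgetting shifts emphasis toward the
    synthetic data. The mapping f -> f / (f + smoothing) is an assumed,
    illustrative choice, not the paper's rule."""
    w_old = forgetting / (forgetting + smoothing)
    return (1.0 - w_old) * loss_new + w_old * loss_synthetic

# Example: a client whose accuracy on old data dropped from 0.90 to 0.70
f = forgetting_degree(0.90, 0.70)
total = adaptive_loss(loss_new=1.0, loss_synthetic=2.0, forgetting=f)
```

With no forgetting (`f = 0`) the combined loss reduces to the new-data loss alone; as forgetting grows, the synthetic-data term dominates, which mirrors the abstract's goal of preserving old-task performance without storing old data.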