PI-Fed: Continual Federated Learning With Parameter-Level Importance Aggregation

Published: 01 Jan 2024, Last Modified: 19 May 2025, IEEE Internet Things J. 2024, CC BY-SA 4.0
Abstract: Federated Learning (FL) has drawn much attention for distributed systems over the Internet of Things (IoT), since it enables collaborative machine learning on heterogeneous devices while resolving concerns about privacy leakage. Due to the catastrophic forgetting (CF) phenomenon of optimization methods, existing FL approaches are restricted to single-task learning and typically assume that data from all nodes are simultaneously available during training. However, in practical IoT scenarios, data preparation across nodes may be asynchronous, and different tasks require incremental training. To address these issues, we propose a continual FL (CFL) framework with parameter-level importance aggregation (PI-Fed), which supports collaborative task-incremental learning with privacy preservation. Specifically, PI-Fed evaluates the importance of each parameter in the global model to all historical tasks; importance is computed locally and aggregated at the central server. The server then performs soft-masking on the averaged gradient collected from local clients based on the parameter importance. By minimizing changes to important parameters, PI-Fed effectively overcomes CF and achieves high efficiency without experience replay. Extensive experiments on 4 benchmarks with up to 20 sequential tasks demonstrate that PI-Fed significantly outperforms traditional FL baselines (FedAvg, FedNova, and SCAFFOLD).
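The abstract describes the mechanism only at a high level. Below is a minimal sketch of the server-side step it outlines, assuming per-parameter importance scores are averaged across clients and the soft mask takes the hypothetical form 1 − importance/max(importance); the function names `aggregate_importance` and `soft_masked_server_step` are invented for illustration and are not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch of the server-side update described in the abstract:
# clients report per-parameter importance scores, the server averages them,
# and the averaged client gradient is soft-masked so that parameters
# important to past tasks change the least. The mask formula below
# (1 - normalized importance) is an assumption, not the paper's.

def aggregate_importance(client_importances):
    """Average per-parameter importance scores reported by clients."""
    names = client_importances[0].keys()
    return {name: np.mean([ci[name] for ci in client_importances], axis=0)
            for name in names}

def soft_masked_server_step(global_params, client_grads, importance, lr=0.01):
    """One server round: FedAvg-style gradient mean, then importance soft-mask."""
    updated = {}
    for name, w in global_params.items():
        # Average the gradients collected from local clients.
        g = np.mean([cg[name] for cg in client_grads], axis=0)
        imp = importance[name]
        # Soft mask in [0, 1]: high importance -> small update.
        mask = 1.0 - imp / (imp.max() + 1e-12)
        updated[name] = w - lr * mask * g
    return updated

# Toy usage: two clients, one weight tensor.
params = {"w": np.ones(4)}
grads = [{"w": np.array([0.2, 0.4, 0.1, 0.3])},
         {"w": np.array([0.0, 0.2, 0.3, 0.1])}]
imps = aggregate_importance([{"w": np.array([1.0, 0.0, 0.5, 0.2])},
                             {"w": np.array([0.8, 0.1, 0.4, 0.3])}])
new_params = soft_masked_server_step(params, grads, imps)
```

As the abstract emphasizes, only importance scores and gradients leave the clients in this scheme, so raw data stays local and no experience-replay buffer is required.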