FedEditor: Efficient and Effective Federated Unlearning in Cooperative Intelligent Transportation Systems

Published: 01 Jan 2025, Last Modified: 01 Aug 2025 · IEEE Trans. Inf. Forensics Secur. 2025 · CC BY-SA 4.0
Abstract: In cooperative intelligent transportation systems (CITS), federated learning enables vehicles to train a global model without sharing private data. However, the lack of an unlearning mechanism to remove the influence of vehicle-specific data from the global model potentially violates data protection regulations regarding the right to be forgotten. While existing federated unlearning (FU) methods exhibit promising unlearning effects, their practicality in CITS is hindered by the time-consuming retraining required of other vehicles and the non-negligible performance sacrifice on the un-forgotten data. Achieving effective unlearning without extensive retraining, while minimizing performance degradation on the un-forgotten data, therefore remains a challenge. In this work, we propose FedEditor, an efficient and effective FU framework for CITS that addresses this challenge by reconfiguring the global model’s representation space to remove critical classification-related knowledge derived from the unlearned data. Firstly, FedEditor enables a vehicle to perform the unlearning process locally on the global model, eliminating the need for other vehicles to participate and thereby improving efficiency. Secondly, FedEditor captures the representations of the unlearned data and aligns them with those of the nearest incorrect class centroid derived from non-training data, ensuring effective unlearning while keeping the un-forgotten data’s knowledge relatively intact to achieve competitive model performance. Finally, FedEditor refines the global model’s output distributions using the vehicles’ remaining data and incorporates a drift-mitigating regularization term, minimizing the negative impact of unlearning operations on model performance. Experimental results show that FedEditor reduces the unlearning rate by up to 99.64% without time-consuming retraining, while limiting the predictive performance loss of the resulting global model to less than 3.88% across five models and seven datasets.
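To make the core idea more concrete, the sketch below illustrates one plausible reading of the unlearning step described in the abstract: representations of the data to be forgotten are pulled toward the nearest incorrect class centroid (computed from non-training data), while a drift-mitigating regularizer keeps the model's output distribution on the remaining data close to that of the pre-unlearning global model. This is a minimal PyTorch sketch under assumed interfaces; the helper names (class_centroids, unlearn_step), the model layout (a callable feature_extractor plus classifier head), and the specific loss choices (MSE alignment, KL drift term, lambda_drift weighting) are illustrative assumptions, not FedEditor's actual implementation.

```python
# Minimal sketch of representation-alignment unlearning with drift regularization.
# All structural details here are assumptions made for illustration.

import torch
import torch.nn.functional as F


def class_centroids(feature_extractor, loader, num_classes, device):
    """Mean representation per class, computed from held-out (non-training) data."""
    sums, counts = None, torch.zeros(num_classes, device=device)
    feature_extractor.eval()
    with torch.no_grad():
        for x, y in loader:
            z = feature_extractor(x.to(device))
            if sums is None:
                sums = torch.zeros(num_classes, z.shape[1], device=device)
            sums.index_add_(0, y.to(device), z)
            counts.index_add_(0, y.to(device), torch.ones(len(y), device=device))
    return sums / counts.clamp(min=1).unsqueeze(1)


def unlearn_step(model, frozen_model, forget_batch, remain_batch, centroids,
                 optimizer, lambda_drift=1.0, device="cpu"):
    """One local unlearning step on the global model (no other vehicles involved)."""
    model.train()
    xf, yf = (t.to(device) for t in forget_batch)
    xr, _ = (t.to(device) for t in remain_batch)

    # 1) Align representations of the unlearned data with the nearest
    #    *incorrect* class centroid (assumes model.feature_extractor exists).
    zf = model.feature_extractor(xf)                    # (B, D) representations
    dist = torch.cdist(zf, centroids)                   # (B, C) distances to centroids
    dist.scatter_(1, yf.unsqueeze(1), float("inf"))     # exclude the true class
    target = centroids[dist.argmin(dim=1)]              # nearest incorrect centroid
    loss_forget = F.mse_loss(zf, target)

    # 2) Drift-mitigating regularization on the remaining data: keep the output
    #    distribution close to the frozen pre-unlearning global model.
    with torch.no_grad():
        p_old = F.softmax(frozen_model(xr), dim=1)
    log_p_new = F.log_softmax(model(xr), dim=1)
    loss_drift = F.kl_div(log_p_new, p_old, reduction="batchmean")

    loss = loss_forget + lambda_drift * loss_drift
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `frozen_model` would be a frozen deep copy of the global model taken before unlearning begins (e.g., `copy.deepcopy(model).eval()`), serving only as the reference for the drift term; the forgotten-data alignment and the output refinement on remaining data are combined into a single local objective, consistent with the abstract's description of a retraining-free, vehicle-local procedure.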