Abstract: Federated Learning (FL) supports collaborative training of Machine Learning (ML) models across multiple parties. Within the FL framework, to comply with data protection regulations and to mitigate the influence of malicious clients, Federated Unlearning (FU) has emerged as a viable solution, enabling clients to remove their private data from the system within a reasonable time. However, existing FU methods often require the participation of innocent clients in the unlearning process, which introduces challenges such as privacy leakage and high learning costs. Our goal is to propose an FU method that reduces the involvement of innocent clients while protecting privacy. In this paper, we present DIsFU, which consists of two parts: data impression generation and bias model correction. To minimize the participation of innocent clients, we conduct the unlearning process entirely on the server side, requiring only the server and the targeted client to participate. Additionally, our approach uses knowledge distillation to restore the performance of the unlearned model. To better protect user privacy and suit real-world scenarios, the distillation process employs pseudo-data, thereby avoiding reliance on external unlabeled datasets. We use the success rate of backdoor attacks to measure the forgetting effect of our method. Extensive experiments show that, with only about five rounds of distillation, our method achieves performance close to retraining under different numbers of clients, outperforming the baseline.