Federated Unlearning with Multiple Client Partitions

Published: 01 Jan 2024, Last Modified: 12 Oct 2024, ICC 2024, CC BY-SA 4.0
Abstract: Federated learning (FL) has recently attracted growing attention at the intersection of distributed machine learning (ML) and privacy-preserving computation. As in traditional ML systems, there is a need for effective and efficient unlearning algorithms that remove the influence of certain training data from the FL model. Traditional machine unlearning algorithms are ill-suited to FL systems, since client data are both private and non-IID. In this paper, we propose FedUMP, a new federated unlearning algorithm that improves model performance and accelerates the unlearning process. Its main idea is to first create multiple different client partition strategies, each of which divides the clients into several subsets. We then independently train a subset model for each client subset and aggregate the subset models' outputs for prediction. We further propose a retraining acceleration method that reduces the time cost of maintaining multiple partitions, and a partition strategy design method that searches for good partition strategies efficiently. Extensive experiments on various datasets and model architectures demonstrate that FedUMP improves both model performance and unlearning speed.
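The partition-and-aggregate idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the round-robin partitioning, the mean-based toy "subset model", and all function names here are illustrative assumptions. The key property shown is that unlearning a client only requires retraining the subsets that contained it, not all clients.

```python
# Hedged sketch of ensemble-style federated unlearning with multiple client
# partitions (illustrative only; not the FedUMP implementation).
from statistics import mean

def make_partitions(client_ids, subset_counts):
    # One partition strategy per entry in subset_counts; each strategy splits
    # the clients into that many subsets (round-robin, for illustration).
    partitions = []
    for k in subset_counts:
        subsets = [[] for _ in range(k)]
        for i, c in enumerate(client_ids):
            subsets[i % k].append(c)
        partitions.append(subsets)
    return partitions

def train_subset(data, subset):
    # Toy "subset model": the mean of the subset's client values.
    return mean(data[c] for c in subset)

def train_all(data, partitions):
    # Independently train one model per client subset, per partition strategy.
    return [[train_subset(data, s) for s in p] for p in partitions]

def predict(models):
    # Aggregate the outputs of all subset models across partitions.
    return mean(m for p in models for m in p)

def unlearn(data, partitions, models, client):
    # Retrain only the subsets that contained the removed client.
    retrained = 0
    for subsets, mods in zip(partitions, models):
        for j, subset in enumerate(subsets):
            if client in subset:
                subset.remove(client)
                mods[j] = train_subset(data, subset)
                retrained += 1
    return retrained
```

For example, with four clients and a single two-subset strategy, unlearning one client retrains only the one subset it belonged to, while the other subset model is reused unchanged.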