FedU: Federated unlearning via user-side influence approximation forgetting

Published: 19 Dec 2024 · Last Modified: 28 Jan 2026 · IEEE Transactions on Dependable and Secure Computing · CC BY-NC-SA 4.0
Abstract: Machine unlearning has become a significant research topic worldwide due to the growing importance of privacy protection, particularly in light of right-to-be-forgotten legislation. Although many solutions have been proposed, mainstream centralized machine unlearning approaches are infeasible in federated learning (FL), where the server has no access to any user's unlearning samples. In this paper, we tackle the federated unlearning problem by proposing a Federated Unlearning (FedU) scheme built on a user-side influence approximation forgetting method, thereby eliminating the need to share raw data with the server. In FedU, only users with unlearning needs execute influence approximation forgetting, while the other users and the server perform exactly the same operations as in standard FL. The proposed influence approximation forgetting method achieves unlearning by estimating the influence of the erased samples using only the user's local data and removing this influence from the model. However, directly removing the influence estimate still degrades model utility. To mitigate this side effect of unlearning, we propose a utility preservation method that simultaneously trains the unlearned model on the unlearning requesters' remaining local data. We further design an adaptive optimization method that optimally balances forgetting effectiveness and utility preservation during the unlearning process. Extensive evaluations on three representative public datasets demonstrate that our method significantly outperforms state-of-the-art methods in both effectiveness and efficiency, avoiding more than 3% accuracy degradation when the number of unlearning requesters is large.
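To make the idea concrete, below is a minimal, illustrative sketch of the two ingredients the abstract describes: approximately removing the erased samples' influence (here via a first-order approximation, i.e., gradient ascent on the erased samples' loss) while simultaneously training on the requester's remaining data to preserve utility, with a weight `lam` balancing the two. All names, the logistic model, and the fixed `lam` are assumptions for illustration, not the paper's actual algorithm (which uses influence estimation and an adaptive balance).

```python
import numpy as np

def grad_logistic(w, X, y):
    """Gradient of the mean logistic loss for a linear model (illustrative)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def loss(w, X, y):
    """Mean logistic loss, used to check the forgetting effect."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return float(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))

def unlearn_step(w, X_erase, y_erase, X_retain, y_retain, lam=0.9, lr=0.1):
    """One combined user-side update: ascend on the erased samples
    (approximate influence removal) while descending on the retained
    samples (utility preservation). `lam` is a hypothetical fixed weight
    standing in for the paper's adaptive balancing."""
    g_forget = grad_logistic(w, X_erase, y_erase)
    g_retain = grad_logistic(w, X_retain, y_retain)
    return w + lr * (lam * g_forget - (1.0 - lam) * g_retain)

# Toy local dataset: label depends on the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = (X[:, 0] > 0).astype(float)

# Train the local model as in ordinary FL local training.
w = np.zeros(3)
for _ in range(200):
    w -= 0.5 * grad_logistic(w, X, y)

# The user requests unlearning of the first 5 samples.
X_erase, y_erase = X[:5], y[:5]
X_retain, y_retain = X[5:], y[5:]

w_unlearned = w.copy()
for _ in range(20):
    w_unlearned = unlearn_step(w_unlearned, X_erase, y_erase, X_retain, y_retain)

# The loss on the erased samples should rise (forgetting), while the
# retained data keeps the model from degrading arbitrarily.
print("erased loss before:", loss(w, X_erase, y_erase))
print("erased loss after: ", loss(w_unlearned, X_erase, y_erase))
```

Note that everything above runs on the user's side with only local data, mirroring the constraint that the server never sees the unlearning samples.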