VeriFi: Towards Verifiable Federated Unlearning

Published: 01 Jan 2024 · Last Modified: 06 Feb 2025 · IEEE Trans. Dependable Secur. Comput. 2024 · CC BY-SA 4.0
Abstract: Federated learning (FL) has emerged as a privacy-aware collaborative learning paradigm in which participants jointly train a powerful model without sharing their private data. One desirable property for FL is the implementation of the right to be forgotten (RTBF), i.e., a leaving participant has the right to request the deletion of its private data's contribution from the global model. However, unlearning alone may not be enough to implement RTBF unless the unlearning effect can be independently verified, an important aspect that has been overlooked in the current literature. Unlearning verification is particularly challenging in FL because the unlearning effect on one participant's data can be canceled out by the contributions of other participants. In this work, we introduce the concept of verifiable federated unlearning and propose VeriFi, a unified framework that enables systematic analysis of federated unlearning and quantification of its effect under different combinations of unlearning and verification methods. In VeriFi, the leaving participant is granted the right to verify (RTV), i.e., to actively verify the unlearning effect during the first few rounds immediately after notifying the server of its intention to leave. Local verification is done in two steps: 1) marking, which fingerprints the leaving participant with specially designed markers, and 2) checking, which examines the global model's performance change on those markers. Based on VeriFi, we conduct the most systematic study of verifiable federated unlearning to date, covering six unlearning methods and five verification methods. Our study sheds light on the drawbacks of existing methods and on potential alternatives for both unlearning and verification. In the course of the study, we also propose a more efficient and FL-friendly unlearning method, $^{u}$S2U, and two more effective and robust non-invasive verification methods, $^{v}$FM and $^{v}$EM, which require no training controllability, external data, or white-box model access, and introduce no new security risks. While the proposed methods are not a panacea for all challenges, they address several key drawbacks of existing methods and represent a promising step toward effective, efficient, robust, and, most importantly, non-invasive federated unlearning and verification. We extensively evaluate VeriFi on seven datasets, spanning natural, facial, and medical images as well as audio, and on four types of deep learning models, including both Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). We hope that such an extensive and holistic experimental evaluation, although admittedly complex and challenging, can help establish important empirical understandings, evidence, and insights for trustworthy federated unlearning.
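To make the mark-and-check verification idea concrete, the sketch below illustrates one plausible form of local verification: the leaving participant measures the downloaded global model's accuracy on its marker set before and after the unlearning rounds, and treats a sufficiently large drop as evidence of unlearning. This is a minimal illustration only; the function names, the accuracy metric, and the drop threshold are assumptions for exposition, not the paper's actual $^{v}$FM/$^{v}$EM procedures.

```python
# Hypothetical sketch of "marking" + "checking" for a leaving participant.
# Assumes a PyTorch classifier and a small marker dataset held by the participant.
import torch


@torch.no_grad()
def marker_accuracy(model, marker_loader, device="cpu"):
    """Accuracy of the (downloaded) global model on the participant's marker set."""
    model.eval()
    correct, total = 0, 0
    for x, y in marker_loader:
        x, y = x.to(device), y.to(device)
        preds = model(x).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)


def check_unlearning(acc_before, acc_after, drop_threshold=0.2):
    """Illustrative decision rule (threshold is an assumption): a large enough
    accuracy drop on the markers is taken as evidence that the participant's
    contribution was removed from the global model."""
    return (acc_before - acc_after) >= drop_threshold


# Usage sketch: verify over the first few rounds after requesting to leave.
# acc_before = marker_accuracy(global_model_at_leave_time, marker_loader)
# acc_after  = marker_accuracy(global_model_after_unlearning, marker_loader)
# unlearned  = check_unlearning(acc_before, acc_after)
```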