Prompt Certified Machine Unlearning with Randomized Gradient Smoothing and Quantization

Published: 31 Oct 2022, 18:00 · Last Modified: 12 Oct 2022, 19:46 · NeurIPS 2022 Accept
Keywords: Certified machine unlearning, randomized gradient smoothing, gradient quantization, theoretical guarantee, prompt unlearning
TL;DR: Prompt certified machine unlearning that performs simultaneous training and unlearning in advance, responding to multiple unlearning requests at a time
Abstract: The right to be forgotten calls for efficient machine unlearning techniques that make trained machine learning models forget a cohort of data. The combination of training and unlearning operations in traditional machine unlearning methods often leads to expensive computational costs on large-scale data. This paper presents a prompt certified machine unlearning algorithm, PCMU, which executes a one-time operation of simultaneous training and unlearning in advance for a series of machine unlearning requests, without knowledge of the removed/forgotten data. First, we establish a connection between randomized smoothing for certified robustness on classification and randomized smoothing for certified machine unlearning on gradient quantization. Second, we propose a prompt certified machine unlearning model based on randomized data smoothing and gradient quantization. We theoretically derive the certified radius R on the data change before and after data removals, and the certified budget of data removals with respect to R. Last but not least, we present another practical framework of randomized gradient smoothing and quantization, to address the difficulty of producing high-confidence certificates in the first framework. We theoretically establish the certified radius R' on the gradient change, the correlation between the two types of certified radii, and the certified budget of data removals with respect to R'.
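To give intuition for the connection the abstract draws between randomized smoothing for classification and certified unlearning on gradient quantization, here is a minimal, hypothetical sketch: each gradient coordinate is perturbed with Gaussian noise and the majority-vote quantization bin is taken, analogous to majority voting over class labels in randomized smoothing. The noise scale `sigma`, grid `step`, and per-coordinate voting scheme are illustrative assumptions, not the paper's actual PCMU algorithm.

```python
# Hypothetical sketch of randomized gradient smoothing + quantization.
# The specific noise distribution, quantization grid, and voting rule
# are assumptions for illustration; they are not taken from the paper.
import random
from collections import Counter

def smoothed_quantized_gradient(grad, sigma=0.05, step=0.1,
                                n_samples=1000, seed=0):
    """Per coordinate, return the majority-vote quantization bin over
    Gaussian perturbations of the gradient.

    Intuition: a small change to `grad` (e.g. from removing a few
    training points) is unlikely to flip the winning bin, which mirrors
    how randomized smoothing certifies a radius of input changes that
    leave the predicted class unchanged.
    """
    rng = random.Random(seed)
    smoothed = []
    for g in grad:
        # Count which quantization bin each noisy sample falls into.
        votes = Counter(round((g + rng.gauss(0.0, sigma)) / step)
                        for _ in range(n_samples))
        bin_idx, _ = votes.most_common(1)[0]
        smoothed.append(bin_idx * step)
    return smoothed

# Gradient before vs. after a small (hypothetical) data removal: the
# dominant bins typically coincide, so the smoothed gradients agree.
g_full = [0.32, -0.18, 0.07]
g_after_removal = [0.33, -0.19, 0.07]
print(smoothed_quantized_gradient(g_full))
print(smoothed_quantized_gradient(g_after_removal))
```

The certified radius R' in the abstract can be read as a bound on how far the gradient may move before such a majority bin could change.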
Supplementary Material: pdf