Abstract: In recent years, with the development of machine learning, large amounts of personal data have been used to train models, which raises serious privacy-leakage concerns. Current regulations mandate the removal of private user information from both databases and machine learning models upon specific deletion requests. While wiping data records from storage is a straightforward task, eliminating the influence of specific data samples from an already-trained model is challenging. Numerous studies in machine unlearning aim to remove the impact of target data by adjusting the parameters of deep models. However, existing methods for data removal in deep learning often fall short of real-world practicality. The intricacies of deep neural networks can make unlearning a less expedient process than retraining from scratch: some methods require extensive matrix computations or even costlier operations, while others demand substantial storage, which hinders their effectiveness in real-world applications. Accordingly, we propose a more practical solution. Our method requires only a small portion of the validation dataset to pre-train a small assistant model, which is then used to update the parameters of the target model quickly. With this assistant model, we can unlearn target classes and target items quickly and accurately. We demonstrate that the proposed method is highly effective on unlearning tasks with moderate amounts of data.
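The abstract does not spell out the algorithm, so the following is only an illustrative sketch of an assistant-model unlearning scheme under stated assumptions: a small assistant network is pre-trained on a held-out validation subset that excludes the class to be forgotten, and the target model is then nudged toward the assistant's predictions on forget-class samples (a generic teacher-guided forgetting setup, not necessarily the authors' exact procedure). All names (AssistantNet, pretrain_assistant, unlearn_class) are hypothetical.

```python
# Illustrative sketch only; not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AssistantNet(nn.Module):
    """Small assistant model pre-trained on a held-out validation subset."""
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        return self.net(x)

def pretrain_assistant(assistant, val_subset, epochs=5, lr=1e-3):
    """Fit the assistant on a small validation subset that excludes the
    forget class, so its outputs carry no information about that class."""
    opt = torch.optim.Adam(assistant.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in val_subset:
            opt.zero_grad()
            F.cross_entropy(assistant(x), y).backward()
            opt.step()
    return assistant

def unlearn_class(target, assistant, forget_loader, steps=100, lr=1e-4):
    """Update the target model so its predictions on forget-class samples
    match the assistant's (teacher-guided forgetting)."""
    opt = torch.optim.SGD(target.parameters(), lr=lr)
    assistant.eval()
    for _, (x, _y) in zip(range(steps), forget_loader):
        with torch.no_grad():
            soft_targets = F.softmax(assistant(x), dim=1)
        loss = F.kl_div(F.log_softmax(target(x), dim=1),
                        soft_targets, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return target
```

Because only the small assistant is trained from scratch and the target model receives a short, targeted update, this kind of scheme avoids full retraining and the heavy matrix computations or storage overhead mentioned above.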
External IDs: dblp:conf/ica3pp/ZhaoYZ24