SSSE: Efficiently Erasing Samples from Trained Machine Learning Models

Published: 04 Nov 2021, Last Modified: 15 May 2023
Venue: PRIML 2021 Poster
Keywords: data unlearning, sample erasure, second-order methods
TL;DR: We propose an efficient single-step method for erasing samples from trained machine learning models, which requires access only to the data to be deleted.
Abstract: The availability of large amounts of user-provided data has been key to the success of machine learning for many real-world tasks. Recently, awareness has grown that users should be given more control over how their data is used. In particular, users should have the right to prohibit the use of their data for training machine learning systems, and to have it erased from already trained systems. While several sample erasure methods have been proposed, all of them have drawbacks which have prevented them from gaining widespread adoption. In this paper, we propose an efficient and effective algorithm, SSSE, for sample erasure that is applicable to a wide class of machine learning models. From a second-order analysis of the model's loss landscape we derive a closed-form update step of the model parameters that only requires access to the data to be erased, not to the original training set. Experiments on CelebFaces Attributes (CelebA) and CIFAR10 show that in certain cases SSSE can erase samples almost as well as the optimal, yet impractical, gold standard of training a new model from scratch with only the permitted data.
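
The paper's exact update step is not reproduced on this page. Purely as an illustration of the kind of closed-form, second-order erasure step the abstract describes, here is a minimal sketch for L2-regularized logistic regression. The function name `erase_samples`, its signature, and the approximation of the Hessian from the erased samples alone are all assumptions for illustration; they are not the SSSE update itself.

```python
# Hypothetical sketch (NOT the SSSE update from the paper): a single
# Newton-style correction that approximately removes the influence of a set
# of samples from trained logistic-regression parameters, using only the
# samples to be erased.
import numpy as np

def erase_samples(theta, X_erase, y_erase, lam=1e-2):
    """Approximately remove (X_erase, y_erase) from trained weights.

    theta   : (d,) trained parameters
    X_erase : (m, d) features of the samples to be deleted
    y_erase : (m,) labels in {0, 1}
    lam     : L2 regularization strength (assumed known from training)
    """
    p = 1.0 / (1.0 + np.exp(-X_erase @ theta))   # predicted probabilities
    grad = X_erase.T @ (p - y_erase)             # gradient contributed by erased samples
    W = p * (1.0 - p)                            # per-sample Hessian weights
    # Hessian approximated from the erased samples plus the regularizer;
    # the paper's second-order analysis derives a different, principled term.
    H = X_erase.T @ (X_erase * W[:, None]) + lam * np.eye(theta.shape[0])
    # Closed-form single step: undo the erased samples' pull on the optimum.
    return theta + np.linalg.solve(H, grad)

# Toy usage with random data (shapes only; not a fidelity test).
rng = np.random.default_rng(0)
theta = rng.normal(size=5)
X_del = rng.normal(size=(10, 5))
y_del = rng.integers(0, 2, size=10)
theta_new = erase_samples(theta, X_del, y_del)
```

Note the appeal of such a step: it is a single linear solve, with no retraining pass over the retained data, which is what makes this family of methods cheap compared to the retrain-from-scratch gold standard.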
Paper Under Submission: The paper is NOT under submission at NeurIPS