Abstract: In the rapidly evolving landscape of machine learning, machine unlearning has become crucial for enhancing data privacy and system security. Our research presents an innovative unlearning technique, Selective Noise Unlearning (SNU), designed to reduce a model's dependency on a specific data subset, known as the forget-set. By employing a noise-induced training paradigm, we disrupt the patterns associated with the forget-set, enabling unlearning within pre-trained models. This approach improves computational efficiency by eliminating the need for extensive data retention, thereby streamlining the unlearning process. We validate SNU on the ResNet18 architecture using CIFAR-10 and MNIST. Through Grad-CAM visualizations, we demonstrate the model's refocused attention following unlearning. Our method achieves unlearning with as few as one to two epochs of retraining, making it a practical solution for scenarios requiring rapid adaptation. This research enhances data privacy, improves unlearning efficiency, and supports enforcement of the right to be forgotten, opening avenues for future innovations in machine learning privacy.
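The abstract does not specify SNU's exact procedure, but the general idea of noise-induced unlearning can be sketched on a toy model. The following is a minimal, illustrative NumPy example under stated assumptions: it pre-trains a multinomial logistic-regression classifier, then runs two "epochs" that pair forget-set inputs with uniformly random labels (interleaved with retain-set steps to preserve remaining performance). The random-label scheme, the interleaving, and the linear model are all assumptions of this sketch, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def sgd_step(W, X, y, lr=0.1):
    # One full-batch gradient step of multinomial logistic regression.
    P = softmax(X @ W)                     # (n, k) class probabilities
    Y = np.eye(W.shape[1])[y]              # one-hot labels
    grad = X.T @ (P - Y) / len(X)
    return W - lr * grad

# Toy data: three well-separated Gaussian classes in 2-D, plus a bias feature.
means = np.array([[4.0, 0.0], [0.0, 4.0], [-4.0, -4.0]])
X = rng.normal(size=(300, 2)) + np.repeat(means, 100, axis=0)
X = np.hstack([X, np.ones((300, 1))])      # bias column
y = np.repeat(np.arange(3), 100)

# Pre-train the classifier on all data.
W = np.zeros((3, 3))
for _ in range(200):
    W = sgd_step(W, X, y)

# Forget-set: all samples of class 0.
forget = y == 0
X_f, X_r, y_r = X[forget], X[~forget], y[~forget]

# Noise-induced unlearning (sketch): two passes pairing forget-set
# inputs with random labels, interleaved with retain-set steps.
W_u = W.copy()
for _ in range(2):
    y_noise = rng.integers(0, 3, size=len(X_f))  # random labels disrupt learned patterns
    W_u = sgd_step(W_u, X_f, y_noise)
    W_u = sgd_step(W_u, X_r, y_r)

acc = lambda W, X, y: (softmax(X @ W).argmax(axis=1) == y).mean()
print("forget-set accuracy:", acc(W, X_f, y[forget]), "->", acc(W_u, X_f, y[forget]))
print("retain-set accuracy:", acc(W, X_r, y_r), "->", acc(W_u, X_r, y_r))
```

On a real network (e.g. the paper's ResNet18 setting), the same pattern would apply per mini-batch for one to two epochs; the degree of forgetting would then be checked on held-out forget-set samples and with attention maps such as Grad-CAM.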