Keywords: Machine Unlearning, Low-Rank Adaptation, Vision Transformer
Abstract: Growing privacy regulations have made machine unlearning an essential process for removing the influence of specific data points from trained models. While retraining on the remaining dataset is a straightforward solution, it incurs high computational costs and requires access to the retained data, which may not always be practical. Existing unlearning methods, such as gradient ascent, often suffer from unstable optimization and catastrophic forgetting. Recent studies have demonstrated that, by training only a small number of parameters, Low-Rank Adaptation (LoRA) constrains updates and prevents significant divergence from the base model, effectively mitigating catastrophic forgetting. Building on this insight, we propose NegLoRA, a novel framework that leverages LoRA to improve the efficiency and effectiveness of machine unlearning. Experimental results across various metrics indicate that NegLoRA outperforms baseline methods in unlearning accuracy, generalization, and robustness to inference attacks while remaining computationally efficient. Our code is available at \url{https://github.com/AAAI-ColorAI/NegLoRA}.
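The recipe the abstract describes, gradient ascent on the forget set routed through low-rank adapters, lends itself to a compact illustration. Below is a minimal sketch in PyTorch, assuming the `timm` and `peft` libraries; the model choice, the `qkv` target modules, the synthetic forget set, and all hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of LoRA-constrained gradient-ascent unlearning on a ViT.
# Assumptions: timm for the backbone, peft for LoRA; the "qkv" module name
# follows timm's ViT attention layout. Hyperparameters are illustrative.
import timm
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from peft import LoraConfig, get_peft_model

# Pretrained ViT backbone (head reinitialized for a 10-class task here).
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=10)

# Wrap the frozen base model with low-rank adapters; only the LoRA
# parameters are trainable, which bounds how far unlearning updates
# can move the model away from the original weights.
lora_cfg = LoraConfig(r=8, lora_alpha=16, target_modules=["qkv"], lora_dropout=0.0)
model = get_peft_model(model, lora_cfg)

# Stand-in "forget set": random tensors purely so the sketch is runnable.
forget_set = TensorDataset(torch.randn(32, 3, 224, 224), torch.randint(0, 10, (32,)))
forget_loader = DataLoader(forget_set, batch_size=8)

# Optimize only the trainable (LoRA) parameters.
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)

model.train()
for images, labels in forget_loader:
    # Negate the cross-entropy loss: gradient *ascent* on the forget set,
    # applied only through the low-rank adapter weights.
    loss = -F.cross_entropy(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this sketch, the base weights never change; the negated-loss updates live entirely in the adapter matrices, which is the mechanism by which LoRA limits divergence from the base model during unlearning.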
Submission Number: 23