Unlearn In a Blink

17 Sept 2025 (modified: 14 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Machine unlearning, training-free
Abstract: Machine Unlearning (MU), the technology of erasing undesirable content from Artificial Intelligence (AI) models, plays an essential role in developing safe and trustworthy AI systems. Despite notable advances, baseline MU methods rely on retraining from scratch without the data targeted for removal, a process that is computationally expensive and financially prohibitive. To address this challenge, we propose a simple yet efficient training-free MU baseline that requires no retained dataset: Unlearn In a Blink (Unlink). Our method eliminates the low-dimensional subspaces associated with targeted concepts from the space spanned by the model's weight vectors, thereby rendering the model "blind" to these undesirable contents. This strategy enables MU across diverse visual tasks, including concept erasure for classification, image generation, and multi-modal applications. Notably, Unlink produces the scrubbed model instantly from only a few samples and without additional training. Additionally, we extend our method to handle entangled features by leveraging a generalized Rayleigh quotient over the forgetting and remaining sets, enabling an efficient trade-off between preserving remaining knowledge and suppressing forgetting-set knowledge.
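The abstract's core idea, removing a low-dimensional concept subspace from the span of a layer's weight vectors, can be sketched as a projection. The sketch below is a hypothetical illustration under my own assumptions (the function name `unlink_project`, the choice of SVD over a few concept samples, and the rank parameter are all mine, not the authors' implementation): estimate an orthonormal basis of the concept subspace from a handful of forget-set feature vectors, then project the weight matrix onto the orthogonal complement of that subspace.

```python
import numpy as np

def unlink_project(W, concept_feats, rank=1):
    """Project the input space of weight matrix W (out x in) onto the
    orthogonal complement of the top-`rank` subspace spanned by a few
    forget-concept feature vectors (n_samples x in).

    Hypothetical sketch of the subspace-removal idea described in the
    abstract; not the paper's actual algorithm.
    """
    # Orthonormal basis of the concept subspace from the right
    # singular vectors of the stacked concept features.
    _, _, Vt = np.linalg.svd(concept_feats, full_matrices=False)
    U = Vt[:rank].T                        # shape: in x rank
    P = np.eye(W.shape[1]) - U @ U.T       # projector onto the complement
    return W @ P                           # scrubbed weights, no training

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))            # toy layer weights
c = rng.standard_normal((3, 8))            # a few forget-set features
W_scrubbed = unlink_project(W, c, rank=2)

# Any input direction inside the removed subspace now maps to zero,
# i.e. the layer is "blind" to the targeted concept directions.
_, _, Vt = np.linalg.svd(c, full_matrices=False)
blind = np.allclose(W_scrubbed @ Vt[0], 0)
```

This is training-free and needs only the few concept samples, matching the abstract's claim of producing the scrubbed model "in a blink"; the generalized Rayleigh-quotient extension for entangled features would additionally weigh the remaining-set covariance when choosing which directions to remove.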
Supplementary Material: pdf
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 8407