Ensuring Life-long Forgetting in Sequential Unlearning via Source-free Optimization

ICLR 2026 Conference Submission 16939 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Machine Unlearning, Sequential Unlearning, Source-free
Abstract: Machine unlearning has emerged as a crucial research area due to growing privacy and security concerns. However, most existing methods focus on batch unlearning, processing accumulated requests all at once, which is impractical for real-world scenarios where unlearning requests can arrive at any time. This paper explores a more practical setting, sequential unlearning, where requests must be processed immediately as they arise. We identify two main challenges when current methods are applied to sequential unlearning: failure to ensure life-long forgetting, and inefficiency in processing sequential requests. To overcome these challenges, we propose a novel unlearning method tailored to sequential unlearning. First, we incorporate an additional life-long forgetting term into the unlearning objective and transform the risk-maximization objective into a minimization to ensure stable optimization. Second, we establish a source-free optimization scheme by leveraging the loss bound and the model parameters. This approach not only avoids the considerable computational cost of revisiting the retain set, but also eliminates the need for data from past unlearning rounds. Extensive experiments on benchmark datasets demonstrate that our proposed method i) effectively ensures life-long forgetting, ii) maintains model functionality on the retain set, and iii) exhibits a significant efficiency advantage.
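To make the abstract's two ideas concrete, the sketch below shows one plausible shape such a sequential update could take: risk maximization on the forget data is replaced by minimization of a bounded surrogate, and a parameter-space proximity term stands in for retain-set and past-round data, keeping each step source-free. This is a minimal PyTorch sketch under our own assumptions; the names `sequential_unlearn_step`, `anchor_params`, and `lam`, the surrogate -log(1 - p_y), and the quadratic proximity penalty are illustrative choices, not the paper's actual objective or loss bound.

```python
import torch
import torch.nn.functional as F

def sequential_unlearn_step(model, optimizer, forget_batch, anchor_params, lam=1.0):
    """One unlearning update on a newly arrived forget batch (hypothetical sketch).

    Instead of maximizing cross-entropy on the forget data (unbounded, unstable),
    we minimize a bounded surrogate, -log(1 - p_y), which drives the predicted
    probability of the true label toward zero. A quadratic proximity penalty to
    parameters snapshotted before any unlearning (anchor_params) stands in for a
    retain-set term, so neither retain data nor past forget data is revisited.
    """
    x, y = forget_batch
    logits = model(x)
    # Probability assigned to the true label for each forget example.
    p_y = F.softmax(logits, dim=1).gather(1, y.unsqueeze(1)).squeeze(1)
    # Bounded forgetting loss: minimizing -log(1 - p_y) instead of
    # maximizing -log(p_y) turns risk maximization into stable minimization.
    forget_loss = -torch.log1p(-p_y.clamp(max=1.0 - 1e-6)).mean()

    # Parameter-space regularizer: needs only stored weights, no data.
    prox = sum(((p - a) ** 2).sum()
               for p, a in zip(model.parameters(), anchor_params))

    loss = forget_loss + lam * prox
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `anchor_params` would be snapshotted once before the first request, e.g. `anchor = [p.detach().clone() for p in model.parameters()]`; each subsequent request then needs only its own forget batch, which is what makes the per-request cost independent of the retain set and of earlier unlearning rounds.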
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 16939