Mitigating Privacy Risk via Forget Set-Free Unlearning

ICLR 2026 Conference Submission 16233 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: machine learning, unlearning, privacy, corrective unlearning, deep learning, risks, approximate unlearning, empirical
Abstract: Training machine learning models requires the storage of large datasets, which often contain sensitive or private data. Storing data carries a number of potential risks that increase over time, such as database breaches and malicious adversaries. Machine unlearning is the study of methods to efficiently remove the influence of training data subsets from previously trained models. Existing unlearning methods typically require direct access to the "forget set"---the data to be forgotten---so organizations must retain this data for unlearning rather than deleting it immediately upon request, increasing the risks associated with the forget set. We introduce partially-blind unlearning: using auxiliary information to unlearn without explicit access to the forget set. To operationalize this setting, we propose Reload, a practical partially-blind framework based on gradient optimization and structured weight sparsification. We show that Reload unlearns efficiently, approximating models retrained from scratch, and outperforms several forget set-dependent approaches. On language models, Reload unlearns entities using <0.025% of the retain set and <7% of model weights in <8 minutes on Llama2-7B. In the corrective setting, Reload achieves unlearning even when only 10% of the corrupted data is identified.
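The abstract only names the ingredients of Reload (gradient optimization plus structured weight sparsification using a small retain-set subset, with no forget-set access). The sketch below is one plausible instantiation of that recipe, not the authors' algorithm: it scores weights by their gradient magnitude on a retain-set subset, re-initialises the lowest-scoring fraction, and then briefly fine-tunes on the same subset. The function name, the scoring heuristic, and all hyperparameters are assumptions for illustration.

```python
# Hypothetical sketch of forget set-free ("partially-blind") unlearning.
# Assumed heuristic: weights with low retain-set gradient salience are
# reset, then the model is repaired on the small retain subset.
import torch
import torch.nn.functional as F


def partially_blind_unlearn(model, retain_loader, sparsity=0.07, lr=1e-3, steps=100):
    # 1) Score parameters by gradient magnitude on the retain subset only.
    model.zero_grad()
    for x, y in retain_loader:
        F.cross_entropy(model(x), y).backward()  # gradients accumulate across batches
    scores = torch.cat(
        [p.grad.abs().flatten() for p in model.parameters() if p.grad is not None]
    )
    threshold = torch.quantile(scores, sparsity)  # cut-off for the lowest-salience fraction

    # 2) Re-initialise ("sparsify") the low-salience weights.
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            mask = p.grad.abs() <= threshold
            p[mask] = torch.randn_like(p)[mask] * 0.01

    # 3) Repair the model by fine-tuning on the small retain subset.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    batches = iter(retain_loader)
    for _ in range(steps):
        try:
            x, y = next(batches)
        except StopIteration:
            batches = iter(retain_loader)
            x, y = next(batches)
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model
```

The design intent mirrored here is that no forget-set example is ever touched: both the weight-selection step and the repair step use only the retained data, which is what allows the forget set to be deleted immediately upon request.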
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 16233