Ready2Unlearn: A Learning-Time Approach for Preparing Models with Future Unlearning Readiness

ICLR 2026 Conference Submission 15075 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Machine unlearning, Meta-learning, Data privacy
TL;DR: This paper introduces Ready2Unlearn, a learning-time optimization approach designed to facilitate future unlearning processes.
Abstract: This paper introduces Ready2Unlearn, a learning-time optimization approach designed to facilitate future unlearning processes. Most existing unlearning efforts focus on designing unlearning algorithms that are applied reactively, once an unlearning request arrives during the model deployment phase. Ready2Unlearn instead shifts the focus to the training phase, adopting a "forward-looking" perspective. Building upon well-established meta-learning principles, Ready2Unlearn proactively trains machine learning models with unlearning readiness, so that they are well prepared to handle future unlearning requests in a more efficient and principled manner. Ready2Unlearn is model-agnostic and compatible with any gradient-ascent-based machine unlearning algorithm. We evaluate the method on both vision and language tasks under various unlearning settings, including class-wise unlearning and random data unlearning. Experimental results show that by incorporating such preparedness at training time, Ready2Unlearn produces an unlearning-ready model state, which offers several key advantages when future unlearning is requested: reduced unlearning time, improved retention of overall model capability, and enhanced resistance to the inadvertent recovery of forgotten data. We hope this study will inspire future work on more proactive strategies for equipping machine learning models with built-in readiness, towards more reliable and principled machine unlearning.
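The abstract describes training with built-in readiness for gradient-ascent-based unlearning via meta-learning principles. The following is a minimal toy sketch of that general idea, not the paper's actual algorithm: a single scalar parameter, quadratic retain/forget losses, and a MAML-style regularizer that penalizes the retain loss evaluated *after* a simulated gradient-ascent unlearning step. All names, loss functions, and hyperparameters here are illustrative assumptions.

```python
# Toy sketch of "unlearning readiness" training (illustrative, not the paper's method).
# Model: a single scalar parameter w.
# retain_loss and forget_loss are arbitrary quadratics chosen for the demo.

def retain_loss(w):
    return (w - 1.0) ** 2   # loss on data we want to keep performing well on

def forget_loss(w):
    return (w - 3.0) ** 2   # loss on data that may later need to be unlearned

def unlearn_step(w, alpha=0.1):
    # Simulated unlearning: one gradient-ASCENT step on the forget loss.
    grad_f = 2.0 * (w - 3.0)
    return w + alpha * grad_f

def objective(w, lam):
    # Standard training loss plus a readiness term: the retain loss
    # evaluated at the parameters produced by simulated unlearning.
    return retain_loss(w) + forget_loss(w) + lam * retain_loss(unlearn_step(w))

def train(lam, lr=0.05, steps=300, h=1e-5):
    # Plain gradient descent; gradient via central finite differences.
    w = 0.0
    for _ in range(steps):
        grad = (objective(w + h, lam) - objective(w - h, lam)) / (2 * h)
        w -= lr * grad
    return w

# Baseline (no readiness term) vs. readiness-aware training.
w_base = train(lam=0.0)
w_ready = train(lam=1.0)

# After an actual unlearning request, the ready model retains more capability.
loss_base = retain_loss(unlearn_step(w_base))
loss_ready = retain_loss(unlearn_step(w_ready))
print(f"retain loss after unlearning: baseline={loss_base:.3f}, ready={loss_ready:.3f}")
```

In this toy setting the baseline converges to w = 2, and the simulated unlearning step degrades its retain loss to about 0.64, while the readiness-trained parameter lands where the same unlearning step hurts the retain loss noticeably less, illustrating the claimed advantage of preparing at training time.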
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 15075