Keywords: Machine Unlearning, Continual Learning, Hypernetworks
TL;DR: A unified framework for continual learning and data-free unlearning using hypernetworks.
Abstract: Recent advances in deep learning require models to learn continually, acquiring new tasks and progressively accumulating knowledge without forgetting old ones. Concurrently, growing privacy and safety concerns, along with regulatory requirements, demand that models discard specific knowledge through machine unlearning. Given the rapidly rising relevance of both continual learning and machine unlearning, we consider them together under a unified framework in this paper. However, continual learning typically assumes past data is unavailable, which conflicts with existing unlearning methods that rely on access to that data. Moreover, in the proposed setup, where tasks are repeatedly learned and unlearned over a sequence, maintaining the stability of the tasks that must be retained poses a further challenge. To address these challenges, we propose UnCLe, an Unlearning Framework for Continual Learning designed to learn tasks incrementally and unlearn tasks without access to past data. To perform data-free unlearning, UnCLe leverages hypernetworks in conjunction with an unlearning objective that selectively aligns task-specific parameters with noise. Our experiments on popular benchmarks demonstrate UnCLe's consistent unlearning completeness and its ability to preserve task stability over long sequences.
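To make the data-free unlearning idea concrete, below is a minimal PyTorch sketch of one plausible instantiation, not the authors' actual method: a task-conditioned hypernetwork maps a learned task embedding to the parameters of a target network; unlearning drives the generated parameters for the forgotten task toward a fixed noise vector, while a regularizer keeps outputs for retained tasks near snapshots taken before unlearning. All names (`HyperNet`, `noise_target`, `beta`, the loss form) are hypothetical assumptions for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical task-conditioned hypernetwork: maps a task embedding e_t
# to a flat parameter vector for a small target network.
class HyperNet(nn.Module):
    def __init__(self, emb_dim: int, target_param_count: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(emb_dim, 256), nn.ReLU(),
            nn.Linear(256, target_param_count),
        )

    def forward(self, task_emb: torch.Tensor) -> torch.Tensor:
        return self.body(task_emb)

emb_dim, n_params = 64, 10_000
hnet = HyperNet(emb_dim, n_params)
task_embs = nn.ParameterList(
    [nn.Parameter(torch.randn(emb_dim)) for _ in range(5)]  # one embedding per task
)

forget_id, retain_ids = 2, [0, 1, 3, 4]
noise_target = torch.randn(n_params)  # fixed random "noise parameters" (assumed objective)
with torch.no_grad():  # snapshot retained-task parameters before unlearning
    snapshots = {i: hnet(task_embs[i]).clone() for i in retain_ids}

opt = torch.optim.Adam(
    list(hnet.parameters()) + list(task_embs.parameters()), lr=1e-4
)
beta = 1.0  # retention strength (hypothetical hyperparameter)
for _ in range(100):
    opt.zero_grad()
    # Align the forgotten task's generated parameters with noise...
    loss_forget = torch.norm(hnet(task_embs[forget_id]) - noise_target) ** 2
    # ...while keeping retained tasks' parameters close to their snapshots,
    # so unlearning does not require any past task data.
    loss_retain = sum(
        torch.norm(hnet(task_embs[i]) - snapshots[i]) ** 2 for i in retain_ids
    )
    (loss_forget + beta * loss_retain).backward()
    opt.step()
```

The key property this sketch illustrates is that both the forgetting term and the retention term are computed purely from the hypernetwork's own outputs, so no stored samples from past tasks are needed.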
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13520