Abstract: Large language models trained on web-scale corpora can memorize undesirable data containing misinformation, copyrighted material, or private or sensitive information.
Recently, several machine unlearning algorithms have been proposed to eliminate the effect of such datapoints from trained models -- that is, to approximate *a model that had never been trained on these datapoints in the first place*.
However, evaluating the effectiveness of unlearning algorithms remains an open challenge.
Previous work has relied on heuristics, such as verifying that the model can no longer reproduce the specific information targeted for removal while maintaining accuracy on unrelated test data.
However, such heuristics fail to capture the full effect of reversing a datapoint's influence on a trained model.
In this work, we propose the RESTOR framework for machine unlearning evaluation, which assesses whether unlearning algorithms achieve targeted data erasure: the model should not only forget the knowledge introduced by the targeted datapoints, but also recover the knowledge state it would have had if it had never encountered them.
RESTOR helps uncover several novel insights about popular unlearning algorithms and the mechanisms through which they operate --
for instance, revealing that some algorithms emphasize forgetting without recovering knowledge,
and that localizing unlearning targets can improve unlearning performance.
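To make the forget-versus-recover distinction concrete, the sketch below illustrates one way such an evaluation could be scored: an unlearned model is checked both for no longer reproducing the targeted content and for matching a reference model that never saw that content. This is a minimal illustrative sketch under assumed inputs, not the RESTOR implementation; the function names and metric choices here are hypothetical.

```python
# Minimal sketch (assumed scaffolding, not the RESTOR implementation):
# contrasts "forgetting" (no longer reproducing the targeted content)
# with "recovery" (matching a reference model never trained on it).
from typing import Sequence


def exact_match(preds: Sequence[str], refs: Sequence[str]) -> float:
    """Fraction of predictions that exactly match the reference answers."""
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)


def evaluate_unlearning(
    unlearned_preds: Sequence[str],   # answers after unlearning
    targeted_answers: Sequence[str],  # undesirable answers introduced by the forget set
    gold_answers: Sequence[str],      # correct answers on the same knowledge probes
    clean_preds: Sequence[str],       # answers of a model never trained on the forget set
) -> dict:
    forgetting = 1.0 - exact_match(unlearned_preds, targeted_answers)
    recovery = exact_match(unlearned_preds, gold_answers)
    clean_reference = exact_match(clean_preds, gold_answers)
    # An algorithm can score high on forgetting yet low on recovery,
    # which is exactly the gap the abstract highlights.
    return {
        "forgetting": forgetting,
        "recovery": recovery,
        "recovery_gap_vs_clean": clean_reference - recovery,
    }
```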
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We refined the draft to incorporate the discussion from the rebuttal period, including an expanded related work section, a detailed discussion of limitations regarding scope and scalability, additional experiments as requested by the reviewers, revised abstract and introduction, a minor refinement of the title, and improved figures based on reviewer feedback.
Code: https://github.com/k1rezaei/restor
Supplementary Material: zip
Assigned Action Editor: ~Tongliang_Liu1
Submission Number: 4270