Keywords: Machine Unlearning, Kernelized Stein Discrepancy
Abstract: In response to recent privacy protection regulations, machine unlearning has attracted great interest in the research community. However, existing studies often demonstrate their approaches' effectiveness by measuring the overall unlearning success rate rather than evaluating the likelihood of successfully unlearning specific training samples, leaving the universal feasibility of the unlearning operation unexplored. This paper proposes a novel method to quantify the difficulty of unlearning a single sample by taking into account factors such as the model and the data distribution. Specifically, we propose several heuristics to understand the conditions for a successful unlearning operation on data points, explore differences in unlearning difficulty across training data points, and suggest a potential ranking mechanism for identifying the most challenging samples to unlearn. In particular, we find that Kernelized Stein Discrepancy (KSD), computed with a kernel function parameterized for each model and dataset, is an effective heuristic for estimating the difficulty of unlearning a data sample. We demonstrate our findings across multiple classification tasks and existing machine unlearning algorithms, highlighting the practical feasibility of unlearning operations in different scenarios.
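For reference, a minimal sketch of the quantity the abstract invokes: the standard (squared) Kernelized Stein Discrepancy between a sample distribution \(q\) and a model distribution \(p\) with score function \(s_p(x) = \nabla_x \log p(x)\) and kernel \(k\). The paper's exact per-model, per-dataset parameterization of the kernel is not specified in the abstract, so this is only the textbook form:

\[
\mathrm{KSD}^2(q, p) \;=\; \mathbb{E}_{x, x' \sim q}\big[u_p(x, x')\big],
\]
\[
u_p(x, x') \;=\; s_p(x)^\top k(x, x')\, s_p(x')
\;+\; s_p(x)^\top \nabla_{x'} k(x, x')
\;+\; \nabla_{x} k(x, x')^\top s_p(x')
\;+\; \operatorname{tr}\!\big(\nabla_{x} \nabla_{x'} k(x, x')\big).
\]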
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8814