Keywords: Weakly-supervised learning
Verify Author List: I have double-checked the author list and understand that additions and removals will not be allowed after the submission deadline.
Abstract: Unlearning methods that rely solely on forgetting data typically modify the network’s decision boundary to achieve unlearning. However, these approaches are susceptible to the "relearning" problem, in which the network recalls the forgotten class once it is subsequently updated with data from the remaining classes. Our experimental analysis reveals that, although these methods alter the decision boundary, the network’s underlying perception of the samples remains largely unchanged. To address the relearning problem, we introduce the Perception Revising Unlearning (PRU) framework. PRU employs a probability redistribution method that assigns new labels and more precise supervision information to each forgetting-class instance, actively shifting the network’s perception of forgetting-class samples toward the remaining classes. Experimental results demonstrate that PRU not only achieves strong classification performance but also significantly reduces the risk of relearning, suggesting a robust approach to class unlearning tasks that depend solely on forgetting data.
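For intuition, here is a minimal sketch of one plausible probability-redistribution rule consistent with the abstract: zero out the forgotten class's predicted probability and renormalize the remaining mass to form soft labels for fine-tuning on the forgetting data. The function name `redistribute_probabilities`, the renormalization rule, and the KL-divergence objective are illustrative assumptions, not necessarily the paper's exact scheme (see the linked repository for the authors' implementation).

```python
import torch
import torch.nn.functional as F

def redistribute_probabilities(logits: torch.Tensor, forget_class: int) -> torch.Tensor:
    """Assumed redistribution rule: drop the forgotten class's probability
    mass and renormalize over the remaining classes, yielding soft labels
    for a forgetting-class sample."""
    probs = F.softmax(logits, dim=-1).clone()       # clone to avoid in-place edits on autograd buffers
    probs[..., forget_class] = 0.0                  # remove the forgotten class's mass
    return probs / probs.sum(dim=-1, keepdim=True)  # renormalize over remaining classes

# Usage sketch: fine-tune on forgetting data against the redistributed soft
# labels, so the network's perception of these samples shifts toward the
# remaining classes rather than merely moving the decision boundary.
model = torch.nn.Linear(16, 10)                     # stand-in classifier
x = torch.randn(4, 16)                              # batch of forgetting-class inputs
with torch.no_grad():
    targets = redistribute_probabilities(model(x), forget_class=3)
loss = F.kl_div(F.log_softmax(model(x), dim=-1), targets, reduction="batchmean")
loss.backward()
```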
A Signed Permission To Publish Form In Pdf: pdf
Supplementary Material: pdf
Url Link To Your Supplementary Code: https://github.com/DATA-Transpose/PRU
Primary Area: Trustworthy Machine Learning (accountability, explainability, transparency, causality, fairness, privacy, robustness, autoML, etc.)
Paper Checklist Guidelines: I certify that all co-authors of this work have read and commit to adhering to the guidelines in Call for Papers.
Student Author: Yes
Submission Number: 371