Efficient Machine Unlearning for Deep Generative Models by Mitigating Optimization Conflicts

26 Sept 2024 (modified: 14 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: machine unlearning, diffusion model
Abstract: Machine unlearning for deep generative models refers to the process of modifying or updating a pre-trained generative model so that it forgets or removes certain patterns or information it has learned. Existing research on Bayesian-based unlearning for deep generative models has highlighted low efficiency as a significant drawback, with two primary causes. First, Bayesian methods often overlook correlations between the data to forget and the data to remember, leading to conflicts during gradient descent and much slower convergence. Second, they require aligning the updated model parameters with the original ones to preserve the generation ability of the updated model, further reducing efficiency. To address these limitations, we propose an Efficient Bayesian-based Unlearning method for deep generative models, called EBU. By identifying the weights relevant to the data to forget and the data to remember, EBU preserves only the parameters related to the data to remember, improving efficiency. In addition, EBU balances the gradient descent directions of shared parameters to manage the conflicts caused by correlations between the data to forget and the data to remember, yielding a more efficient unlearning process. Extensive experiments on multiple generative models demonstrate the superiority of the proposed EBU.
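The abstract leaves the balancing rule unspecified. As a rough illustration of what "balancing the gradient descent directions of shared parameters" can look like in practice, the sketch below applies a PCGrad-style conflict projection between a forgetting objective and a remembering objective. This is not the paper's EBU algorithm; the function name `balanced_unlearning_step` and the objectives `loss_forget` and `loss_remember` are hypothetical placeholders.

```python
# Minimal sketch (PyTorch), assuming two scalar objectives on shared
# parameters: one whose minimization erases the forget data (e.g., the
# negated likelihood of that data) and one that preserves the retain data.
# The projection rule is PCGrad-style and stands in for EBU's unspecified
# balancing scheme.
import torch
import torch.nn as nn

def balanced_unlearning_step(model: nn.Module,
                             loss_forget: torch.Tensor,
                             loss_remember: torch.Tensor,
                             lr: float = 1e-4) -> None:
    """Apply one update that descends both objectives on shared parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads_f = torch.autograd.grad(loss_forget, params, retain_graph=True)
    grads_r = torch.autograd.grad(loss_remember, params)
    with torch.no_grad():
        for p, gf, gr in zip(params, grads_f, grads_r):
            dot = torch.dot(gf.flatten(), gr.flatten())
            if dot < 0:  # the two objectives conflict on this parameter
                # Drop the component of the forget gradient that opposes the
                # remember gradient, so forgetting does not erode generation
                # quality on the retained data.
                gf = gf - (dot / gr.norm().pow(2).clamp_min(1e-12)) * gr
            p -= lr * (gf + gr)
```

Projecting out only the conflicting component, rather than averaging the two gradients, is one standard way to avoid the slow convergence the abstract attributes to gradient conflicts: non-conflicting updates proceed at full strength, while conflicting ones are redirected instead of canceled.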
Supplementary Material: zip
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5433