Federated Unlearning with Diffusion Models

28 Sept 2024 (modified: 15 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Federated Unlearning, Diffusion Model
Abstract: In recent years, diffusion models have been widely adopted by individual users due to their outstanding generative performance. During usage, individual users develop a need to forget privacy-related content, making client-side use of diffusion models a natural federated unlearning setting. For this scenario, we propose FedDUL, a Federated UnLearning method with Diffusion models, which handles unlearning requests from clients. On the one hand, we use local data on the clients to perform attention-based unlearning, enabling the local diffusion model to forget the concepts specified by each client. On the other hand, we filter and group the unlearning requests from clients, gradually aggregating reasonable requests into the global diffusion model on the server, thereby protecting client privacy within the global model. Theoretical analysis further demonstrates the inherent unity between diffusion-based federated unlearning and federated learning, and extends this unity to traditional federated unlearning methods. Extensive quantitative and visualization experiments evaluate the unlearning of both the local and global models and examine the communication and computation costs of our method, demonstrating that it can satisfy the unlearning requests of multiple clients without compromising generative capability on unrelated concepts, providing new ideas and methods for applying diffusion models to federated unlearning.
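The abstract describes FedDUL only at a high level, so the following is a minimal, hypothetical PyTorch sketch of the two sides it mentions: client-side attention-based concept unlearning and server-side filtering, grouping, and aggregation of unlearning requests. Every name here (ToyCrossAttention, client_attention_unlearn, server_filter_and_aggregate, min_votes) and the specific objective (mapping the forgotten concept's keys/values onto an anchor concept, a trick common in concept-erasure work) are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
from collections import Counter

# Toy stand-in for a diffusion model's cross-attention K/V projections;
# the paper presumably operates on a full text-to-image diffusion model.
class ToyCrossAttention(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)

    def forward(self, text_emb):
        return self.to_k(text_emb), self.to_v(text_emb)

def client_attention_unlearn(model, forget_emb, anchor_emb, steps=100, lr=1e-3):
    """Attention-based unlearning sketch (assumed objective): fine-tune the
    K/V projections so the forgotten concept's embedding maps to the anchor
    concept's keys/values, erasing the concept from cross-attention."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    with torch.no_grad():
        k_target, v_target = model(anchor_emb)  # frozen targets from the anchor
    for _ in range(steps):
        k, v = model(forget_emb)
        loss = nn.functional.mse_loss(k, k_target) + nn.functional.mse_loss(v, v_target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def server_filter_and_aggregate(global_model, client_models, requests, min_votes=2):
    """Server-side sketch: group clients by requested concept, accept only
    concepts requested by at least `min_votes` clients (threshold assumed),
    and FedAvg the accepted clients' weights into the global model."""
    votes = Counter(requests)
    accepted = {c for c, n in votes.items() if n >= min_votes}
    selected = [m for m, c in zip(client_models, requests) if c in accepted]
    if not selected:
        return global_model
    global_sd = global_model.state_dict()
    for key in global_sd:
        global_sd[key] = torch.stack([m.state_dict()[key] for m in selected]).mean(0)
    global_model.load_state_dict(global_sd)
    return global_model
```

As a usage example, `client_attention_unlearn(ToyCrossAttention(), torch.randn(1, 32), torch.randn(1, 32))` unlearns one toy concept locally, after which the resulting client models and their requested concept names can be passed to `server_filter_and_aggregate`. How FedDUL actually scores which requests are "reasonable" is not stated in the abstract; the vote threshold above is a placeholder.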
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13406