Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation

Published: 08 Apr 2024 · Last Modified: 19 May 2024 · ICLR 2024 R2-FM Workshop Poster · CC BY 4.0
Keywords: unlearnable examples; diffusion models
Abstract:

Diffusion models have demonstrated remarkable performance in image generation tasks, but they also raise security and privacy concerns. To address these concerns, we propose Unlearnable Diffusion Perturbation, a method for generating unlearnable examples for diffusion models that safeguards images from unauthorized exploitation. Our approach designs an algorithm that generates sample-wise perturbation noise for each image to be protected. We frame this as a max-min optimization problem and introduce EUDP, a noise-scheduler-based method that enhances the effectiveness of the protective noise. Our experiments demonstrate that training diffusion models on the protected data significantly reduces the quality of the generated images.
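
The following is a minimal, hypothetical sketch of the max-min framing described in the abstract, not the authors' released implementation: an inner loop fits a surrogate noise-prediction model on the perturbed images (minimization), and an outer loop updates a bounded per-sample perturbation to raise the diffusion training loss (maximization). The surrogate `model(x_t, t)` interface, the step sizes, and the L-infinity budget `eps` are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def diffusion_loss(model, x0, alphas_bar):
    """Standard denoising objective: predict the noise added at a random timestep."""
    t = torch.randint(0, len(alphas_bar), (x0.size(0),), device=x0.device)
    a_bar = alphas_bar[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)

def protect(model, optimizer, images, alphas_bar,
            eps=8 / 255, step=2 / 255, outer_iters=50, inner_iters=5):
    """Alternate inner model updates (min) with outer perturbation updates (max)."""
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(outer_iters):
        # Inner minimization: fit the surrogate diffusion model on the perturbed data.
        for _ in range(inner_iters):
            loss = diffusion_loss(model, (images + delta).detach(), alphas_bar)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # Outer maximization: nudge the perturbation to increase the training loss.
        loss = diffusion_loss(model, images + delta, alphas_bar)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += step * grad.sign()
            delta.clamp_(-eps, eps)                              # stay within the noise budget
            delta.copy_((images + delta).clamp(0, 1) - images)   # keep pixels in valid range
    return (images + delta).detach()
```

Under this reading, the returned images are the protected (unlearnable) versions; a diffusion model later trained on them would see a consistently inflated denoising loss, which is the mechanism the abstract credits for the drop in generation quality.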

Submission Number: 57