Diffusion models have demonstrated remarkable performance in image generation while also raising security and privacy concerns. To address these concerns, we propose Unlearnable Diffusion Perturbation (UDP), a method for generating unlearnable examples that safeguards images from unauthorized exploitation by diffusion models. Our approach designs an algorithm that generates sample-wise perturbation noise for each image to be protected. We frame this as a max-min optimization problem and introduce EUDP, a noise-scheduler-based method that enhances the effectiveness of the protective noise. Our experiments demonstrate that training diffusion models on the protected data significantly degrades the quality of the generated images.
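
To make the max-min formulation concrete, here is a minimal PyTorch sketch, not the paper's actual algorithm: an inner loop trains a surrogate diffusion model on the perturbed images (the min), and an outer PGD-style step updates a per-sample perturbation to degrade that fitted model (the max). All names (`generate_udp`, `ddpm_loss`), the toy surrogate model, and the hyperparameters are illustrative assumptions; a real denoiser would also condition on the timestep.

```python
# Illustrative sketch of a max-min unlearnable-perturbation loop.
# Not the paper's implementation; all hyperparameters are placeholders.
import torch
import torch.nn as nn

def ddpm_loss(model, x, alphas_bar):
    """Standard DDPM denoising loss: predict the noise added at a random timestep."""
    t = torch.randint(0, len(alphas_bar), (x.size(0),), device=x.device)
    a = alphas_bar[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x)
    x_t = a.sqrt() * x + (1 - a).sqrt() * eps  # forward diffusion at step t
    # Toy simplification: the surrogate model here is not conditioned on t.
    return ((model(x_t) - eps) ** 2).mean()

def generate_udp(model, images, alphas_bar, eps_budget=8 / 255,
                 outer_steps=10, inner_steps=5, pgd_lr=1 / 255, model_lr=1e-4):
    """Alternate between (min) fitting the surrogate model on perturbed data
    and (max) a projected gradient step on the per-sample perturbation."""
    delta = torch.zeros_like(images, requires_grad=True)
    opt = torch.optim.Adam(model.parameters(), lr=model_lr)
    for _ in range(outer_steps):
        # Inner minimization: train the surrogate on the current perturbed data.
        for _ in range(inner_steps):
            opt.zero_grad()
            ddpm_loss(model, (images + delta).detach(), alphas_bar).backward()
            opt.step()
        # Outer maximization: push delta to increase the fitted model's loss.
        loss = ddpm_loss(model, images + delta, alphas_bar)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += pgd_lr * grad.sign()
            delta.clamp_(-eps_budget, eps_budget)  # L-infinity projection
    return delta.detach()

# Toy usage: a small CNN stands in for the diffusion model's denoiser.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.SiLU(),
                      nn.Conv2d(16, 3, 3, padding=1))
alphas_bar = torch.linspace(0.9999, 0.01, 1000)  # placeholder noise schedule
images = torch.rand(4, 3, 32, 32)
delta = generate_udp(model, images, alphas_bar)
```

The L-infinity budget keeps the protective noise imperceptible; the alternating structure mirrors the max-min objective, with the noise schedule entering through `alphas_bar` (the quantity EUDP's scheduler-based weighting would act on, under this reading of the abstract).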