DUAW: Data-free Universal Adversarial Watermark against Stable Diffusion Customization

Published: 04 Mar 2024, Last Modified: 14 Apr 2024 · SeT LLM @ ICLR 2024 · CC BY 4.0
Keywords: Copyright Protection, Stable Diffusion Customization, Adversarial Attack
TL;DR: A data-free, universal watermark capable of disrupting Stable Diffusion customization and inducing distorted output.
Abstract: Stable Diffusion (SD) customization approaches enable users to personalize SD model outputs, greatly enhancing the flexibility and diversity of AI art. However, they also allow individuals to plagiarize specific styles or subjects from copyrighted images, which raises significant concerns about potential copyright infringement. To address this issue, we propose an invisible data-free universal adversarial watermark (DUAW), aiming to protect copyrighted images from different customization approaches across various versions of SD models. First, DUAW is designed to disrupt the variational autoencoder (VAE) during SD customization. Second, DUAW is trained on synthetic images produced by a Large Language Model (LLM) and a pretrained SD model; that is, it is generated in a data-free manner without using any copyrighted images. Once crafted, DUAW can be imperceptibly integrated into any copyrighted image, serving as a protective measure by inducing significant distortions in the images generated by customized SD models. Experimental results demonstrate that DUAW can distort the outputs of fine-tuned SD models, making them discernible to both human observers and a simple classifier, and yields more effective protection than existing methods.
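To make the VAE-disruption idea concrete, the sketch below shows one way a single universal perturbation could be optimized against a frozen SD VAE on synthetic surrogate images. This is a minimal illustration under stated assumptions, not the paper's implementation: the checkpoint name, the Adam optimizer, the L-infinity budget, and the reconstruction-error objective are all illustrative choices.

```python
import torch
from diffusers import AutoencoderKL

# Frozen SD VAE; "stabilityai/sd-vae-ft-mse" is an assumed, publicly available checkpoint.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
vae.requires_grad_(False).eval()

# Universal watermark: one perturbation shared by all images, kept within an L-inf budget.
eps = 8 / 255
delta = torch.zeros(1, 3, 512, 512, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-3)

def training_step(images: torch.Tensor) -> float:
    """images: batch of synthetic surrogate images in [0, 1], shape (B, 3, 512, 512)."""
    x = (images + delta).clamp(0, 1) * 2 - 1            # apply watermark, map to [-1, 1]
    latents = vae.encode(x).latent_dist.sample()          # encode through the frozen VAE
    recon = vae.decode(latents).sample                    # reconstruct from the latents
    # Illustrative objective: maximize reconstruction error so that models fine-tuned on
    # watermarked images inherit distorted VAE behavior (assumed surrogate loss).
    loss = -torch.nn.functional.mse_loss(recon, x.detach())
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)                           # keep the watermark imperceptible
    return loss.item()
```

In the data-free setting the abstract describes, `images` would be generated by a pretrained SD model from LLM-written prompts rather than drawn from any copyrighted data; once training converges, `delta` would be added to the images one wishes to protect.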
Submission Number: 48