SD4Privacy: Exploiting Stable Diffusion for Protecting Facial Privacy

Published: 01 Jan 2024 · Last Modified: 13 Nov 2024 · ICME 2024 · CC BY-SA 4.0
Abstract: Recently, adversarial examples have been introduced to protect personal images from identification by unauthorized face recognition systems. Existing approaches follow the transfer-based adversarial attack paradigm, in which local surrogate models are used to generate protected images. However, such surrogate models are neither necessary nor efficient for generating adversarial examples. In this paper, we propose SD4Privacy (Stable Diffusion for Privacy), which exploits the latent space of the Stable Diffusion model to synthesize adversarial examples. First, we learn an optimal textual embedding of the target image that preserves its representative semantics and directly guides the sampling process of the synthesized image. Then, we use the encoder of the UNet in Stable Diffusion as a substitute for surrogate classification models, enabling efficient adversarial guidance through the semantic h-space of the UNet during adversarial example generation. Experiments show state-of-the-art protection performance, as well as high-quality protected images with visually natural and imperceptible perturbations.