Shielded Diffusion: Generating Novel and Diverse Images using Sparse Repellency

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: Guiding text-to-image diffusion trajectories away from protected images.
Abstract: The adoption of text-to-image diffusion models raises concerns over reliability, drawing scrutiny under the lens of various metrics like calibration, fairness, or compute efficiency. In this work, we focus on two issues that arise when deploying these models: a lack of diversity in the images generated for a given prompt, and a tendency to recreate images from the training set. To solve both problems, we propose a method that coaxes the sampled trajectories of pretrained diffusion models to land on images that fall outside of a reference set. We achieve this by adding repellency terms to the diffusion SDE throughout the generation trajectory, which are triggered whenever the path is expected to land too close to an image in the shielded reference set. Our method is sparse in the sense that these repellency terms are zero and inactive most of the time, and even more so towards the end of the generation trajectory. Our method, named SPELL for sparse repellency, can be used either with a static reference set that contains protected images, or dynamically, by updating the set at each timestep with the expected images concurrently generated within a batch, and with the images of previously generated batches. We show that adding SPELL to popular diffusion models improves their diversity while only marginally impacting their FID, and that it compares favorably to other recent training-free diversity methods. We also demonstrate how SPELL can ensure shielded generation away from a very large set of protected images by considering all 1.2M images from ImageNet as the protected set.
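The abstract describes a repellency term that is triggered only when the trajectory's expected endpoint falls too close to a shielded image. The following is a minimal sketch of that sparse trigger, not the paper's actual implementation; the function name, the overlap-proportional weighting, and the use of plain Euclidean distance in flattened pixel space are all assumptions made for illustration.

```python
import numpy as np

def sparse_repellency(x0_pred, reference_set, radius):
    """Hypothetical sketch of a sparse repellency correction.

    x0_pred:       predicted clean image (flattened), shape (d,)
    reference_set: shielded images, shape (n, d)
    radius:        shielding radius around each protected image

    Returns a correction pushing the predicted endpoint out of any
    reference ball it would land in; zero when no ball is violated,
    which is the sparsity property highlighted in the abstract.
    """
    diff = x0_pred[None, :] - reference_set        # (n, d) offsets
    dists = np.linalg.norm(diff, axis=1)           # (n,) distances
    active = dists < radius                        # sparse trigger mask
    if not np.any(active):
        return np.zeros_like(x0_pred)              # inactive most of the time
    # Push away from each violated reference, scaled by how deep the
    # predicted endpoint sits inside that reference's ball.
    weights = radius / np.maximum(dists[active], 1e-8) - 1.0
    return (weights[:, None] * diff[active]).sum(axis=0)
```

In a sampler, such a correction would be added to the SDE update at each timestep, applied to the model's current estimate of the final image; with a dynamic reference set, the other images being generated in the batch would simply be appended to `reference_set`.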
Lay Summary: When artificial intelligence (AI) generates images, there is little control over the result. Often, given a description, an AI model will produce one specific image rather than a variety of images to choose from. We introduce a mechanism that makes generative AI models, so-called diffusion models, output many different images for a given description. This increases creativity without impacting image quality or closeness to the description. Our mechanism, called SPELL, has another advantage: it can also avoid generating specific images. For example, we protect 1.2 million images, ensuring that whatever image the AI model generates is different enough from all of these existing images.
Primary Area: Deep Learning->Generative Models and Autoencoders
Keywords: Diffusion Model, Guidance, Repellency, Diversity
Submission Number: 7458