Towards Safe Self-Distillation of Internet-Scale Text-to-Image Diffusion Models

Published: 23 Jun 2023, Last Modified: 12 Jul 2023
Venue: DeployableGenerativeAI
Keywords: safety, detoxification, diffusion, text-to-image, generative models, trustworthy ai
TL;DR: This paper introduces SDD, a method that mitigates harmful or copyrighted content generation in large-scale text-to-image models, enabling the removal of multiple concepts at once without compromising image quality.
Abstract: Large-scale image generation models, whose impressive quality is made possible by the vast amount of data available on the Internet, raise social concerns that they may generate harmful or copyrighted content. Biases and harmfulness arise throughout the entire training process and are difficult to remove completely, posing significant hurdles to the safe deployment of these models. In this paper, we propose a method called SDD to prevent problematic content generation in text-to-image diffusion models. We self-distill the diffusion model, guiding the noise estimate conditioned on the target removal concept to match the unconditional one. Compared to previous methods, our approach eliminates a much greater proportion of harmful content from the generated images without degrading overall image quality. Furthermore, it allows the removal of multiple concepts at once, whereas previous works are limited to removing a single concept at a time.
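The self-distillation idea in the abstract can be illustrated with a toy sketch: a frozen copy of the model serves as the teacher, and the student's noise estimate conditioned on the concept to be removed is regressed toward the teacher's unconditional estimate. This is a minimal illustration under stated assumptions, not the paper's implementation; the linear predictor `toy_eps` and all variable names are hypothetical stand-ins for the actual diffusion noise predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_eps(weights, x, cond):
    # Stand-in for the diffusion noise predictor eps_theta(x_t, c):
    # here simply a linear map of the noisy latent plus a conditioning term.
    return weights @ x + cond

x_t = rng.normal(size=4)             # a noisy latent at some timestep t
c_removal = rng.normal(size=4)       # embedding of the concept to erase
c_null = np.zeros(4)                 # unconditional (null) conditioning

teacher_W = rng.normal(size=(4, 4))  # frozen copy of the model (stop-gradient)
student_W = teacher_W.copy()         # student is initialized from the teacher

lr = 0.05
for _ in range(300):
    target = toy_eps(teacher_W, x_t, c_null)    # sg[eps(x_t, null)], frozen
    pred = toy_eps(student_W, x_t, c_removal)   # eps(x_t, c_removal), trained
    # Objective: L = || pred - target ||^2.
    # For this linear toy model, dL/dW is an outer product of the residual
    # with the input, so plain gradient descent suffices.
    grad = 2.0 * np.outer(pred - target, x_t)
    student_W -= lr * grad

# After fitting, conditioning on the removal concept reproduces the
# teacher's unconditional output, i.e. the concept no longer steers generation.
gap = np.linalg.norm(toy_eps(student_W, x_t, c_removal)
                     - toy_eps(teacher_W, x_t, c_null))
```

In the actual method the teacher would be a frozen (e.g. EMA) copy of the full diffusion model and the update would run over sampled timesteps and latents, but the shape of the objective is the same: match the concept-conditioned estimate to the unconditional one.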
Submission Number: 40