Keywords: unlearning, diffusion models, safety
TL;DR: Under benign, non-adversarial conditions, fine-tuning a text-to-image diffusion model on seemingly unrelated data can cause it to "relearn" concepts that were previously erased.
Abstract: Text-to-image diffusion models rely on massive, web-scale datasets. Training them from scratch is computationally expensive, and as a result, developers often prefer to make incremental updates to existing models. These updates often compose fine-tuning steps (to learn new concepts or improve model performance) with "unlearning" steps (to "forget" existing concepts, such as copyrighted works or explicit content). In this work, we demonstrate a critical and previously unknown vulnerability that arises in this paradigm: even under benign, non-adversarial conditions, fine-tuning a text-to-image diffusion model on seemingly unrelated images can cause it to "relearn" concepts that were previously "unlearned." We comprehensively investigate the causes and scope of this phenomenon, which we term "concept resurgence," by performing a series of experiments across several state-of-the-art concept-unlearning methods, followed by fine-tuning of Stable Diffusion v1.4 and Stable Diffusion v2.1. Our findings underscore the fragility of composing incremental model updates, and raise serious new concerns about current approaches to ensuring the safety and alignment of text-to-image diffusion models.
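To make the update paradigm concrete, here is a minimal sketch of the composition the abstract describes: erase a concept, fine-tune on benign data, then re-query the erased prompt. The helpers `erase_concept` and `finetune`, the example prompt, and the `unrelated_images` list are all hypothetical placeholders (not the paper's code or part of the diffusers library); only the pipeline loading and sampling calls are standard diffusers API.

```python
import torch
from diffusers import StableDiffusionPipeline

def erase_concept(pipe: StableDiffusionPipeline, concept: str) -> None:
    """Placeholder: apply any off-the-shelf concept-unlearning update to pipe.unet."""
    raise NotImplementedError

def finetune(pipe: StableDiffusionPipeline, images, steps: int = 1000) -> None:
    """Placeholder: a standard noise-prediction fine-tuning loop on benign images."""
    raise NotImplementedError

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "a painting in the style of Van Gogh"   # an example erased concept
unrelated_images = []  # benign images with no relation to the erased concept (placeholder)

erase_concept(pipe, "Van Gogh style")            # update 1: unlearning
erased = pipe(prompt).images[0]                  # the concept should now be suppressed

finetune(pipe, unrelated_images)                 # update 2: benign, unrelated fine-tuning
resurged = pipe(prompt).images[0]                # concept resurgence: the erased concept may reappear
```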
Submission Number: 66