Minimalist Concept Erasure in Generative Models

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY-NC 4.0
TL;DR: A concept erasure method that works for SOTA rectified flow DiT models
Abstract: Recent advances in generative models have demonstrated remarkable capabilities in producing high-quality images, but their reliance on large-scale unlabeled data has raised significant safety and copyright concerns. Efforts to address these issues by erasing unwanted concepts have shown promise. However, many existing erasure methods involve excessive modifications that compromise the overall utility of the model. In this work, we address these issues by formulating a novel minimalist concept erasure objective based *only* on the distributional distance of final generation outputs. Building on our formulation, we derive a tractable loss for differentiable optimization that leverages backpropagation through all generation steps in an end-to-end manner. We also conduct extensive analysis to show theoretical connections with other models and methods. To improve the robustness of the erasure, we incorporate neuron masking as an alternative to model fine-tuning. Empirical evaluations on state-of-the-art flow-matching models demonstrate that our method robustly erases concepts without degrading overall model performance, paving the way for safer and more responsible generative models.
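The abstract describes two ingredients: a loss defined only on the distributional distance between final generation outputs, computed by backpropagating through every step of a flow-matching sampler, and learnable neuron masks used in place of weight fine-tuning. Below is a minimal, self-contained PyTorch sketch of how those two ideas can fit together. It is not the authors' implementation: the toy velocity network, the Euler sampler, the sigmoid mask parameterization, the RBF-kernel MMD as the distributional distance, and the concept ids are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): erase a concept by
# optimizing only neuron masks, with a loss on the distribution of final
# samples that backpropagates through all rectified-flow sampling steps.

import torch
import torch.nn as nn


class MaskedVelocityNet(nn.Module):
    """Toy velocity field v(x, t, c) with a learnable mask over hidden neurons."""

    def __init__(self, dim: int = 8, hidden: int = 64, n_concepts: int = 2):
        super().__init__()
        self.embed = nn.Embedding(n_concepts, hidden)
        self.fc1 = nn.Linear(dim + 1, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        # sigmoid(mask_logits) in (0, 1) gates each hidden neuron.
        self.mask_logits = nn.Parameter(torch.full((hidden,), 4.0))

    def forward(self, x, t, concept):
        h = torch.relu(self.fc1(torch.cat([x, t], dim=-1)) + self.embed(concept))
        h = h * torch.sigmoid(self.mask_logits)  # neuron masking, not weight updates
        return self.fc2(h)


def sample(model, concept, n, dim=8, steps=8):
    """Euler integration from noise to data, kept differentiable end-to-end
    so a loss on the final samples backpropagates through all steps."""
    x = torch.randn(n, dim)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((n, 1), i * dt)
        x = x + dt * model(x, t, concept)
    return x


def mmd_rbf(a, b, bandwidth=1.0):
    """RBF-kernel MMD^2: one possible choice of distributional distance."""
    def k(u, v):
        return torch.exp(-torch.cdist(u, v).pow(2) / (2 * bandwidth ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()


# Erasure loop: only the mask logits are trained; all weights stay frozen.
model = MaskedVelocityNet()
frozen = MaskedVelocityNet()
frozen.load_state_dict(model.state_dict())
for p in model.parameters():
    p.requires_grad_(False)
model.mask_logits.requires_grad_(True)

ERASE, ANCHOR = 0, 1  # hypothetical concept ids: target to erase vs. neutral anchor
opt = torch.optim.Adam([model.mask_logits], lr=1e-2)

for step in range(200):
    x_erase = sample(model, torch.full((64,), ERASE), 64)       # masked model, target concept
    x_keep = sample(model, torch.full((64,), ANCHOR), 64)       # masked model, unrelated concept
    with torch.no_grad():
        x_anchor = sample(frozen, torch.full((64,), ANCHOR), 64)    # frozen reference outputs
        x_keep_ref = sample(frozen, torch.full((64,), ANCHOR), 64)

    # Pull the erased concept's output distribution toward the anchor distribution,
    # while keeping other outputs close to the frozen model's outputs.
    loss = mmd_rbf(x_erase, x_anchor) + mmd_rbf(x_keep, x_keep_ref)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the gradient of the distributional distance flows through every Euler step back into the mask logits, which is the sense in which the objective touches only the final outputs while the optimization remains end-to-end; the preservation term on the anchor concept stands in for whatever regularization keeps overall utility intact.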
Lay Summary: Generative AI models, which can create lifelike images from simple text prompts, are transforming how we design, create, and communicate. But this powerful ability comes with serious risks: these models can unknowingly generate harmful, biased, or copyrighted content. While researchers have explored ways to “erase” such unwanted concepts, most existing methods are heavy-handed and often damage the model’s overall usefulness. Our research proposes a more precise and reliable solution. We introduce a minimalist approach to concept erasure that focuses on the model’s final outputs, avoiding unnecessary changes to its internal workings. By guiding the model through all its generation steps, we ensure that it stops producing specific content without compromising its broader creative abilities. To strengthen this process, we use a technique called neuron masking, which allows for targeted control without the need for retraining. The result is a safer, more responsible generative AI: one that retains its strengths while respecting ethical and legal boundaries.
Primary Area: Social Aspects->Safety
Keywords: Concept removal, unlearning, flow-matching models
Submission Number: 2240