Enabling Preference-driven Unlearning in Few-step Distilled Text-to-Image Diffusion Models
Keywords: Machine Unlearning, Diffusion Models, Text-to-image Models, Few-step Distilled T2I
Abstract: Few-step distilled (FSD) text-to-image diffusion models enable 2–8 step generation, but they inherit (and sometimes amplify) unsafe capabilities such as identity and nudity synthesis. Existing unlearning methods, including preference-based objectives (*e.g.*, Diffusion-DPO / DUO) (Wallace et al., 2024; Park et al., 2024), are derived for full-step diffusion and rely on noise-prediction-error rewards. We show that this signal misaligns with the FSD posterior-mean predictor parameterization and its *few-step* inductive bias, leading to weak post-distillation forgetting. We propose **CePU**: ***Consistency-Enforced Preference-Driven Unlearning***, a preference-driven unlearning objective that replaces the $\epsilon$-error–based reward with a step-wise *consistency* error aligned with few-step dynamics, enabling direct unlearning of FSD models without re-distillation. Across identity removal and nudity unlearning under red-teaming prompts, **CePU** achieves stronger forgetting at comparable or better utility retention, forming a favorable *Pareto* frontier.
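The abstract's core idea, swapping the $\epsilon$-error reward in a DPO-style preference objective for a step-wise consistency error, can be illustrated with a minimal sketch. All names here (`f_theta`, `consistency_error`, `preference_unlearning_loss`, the `beta` temperature, the keep/forget reward convention) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def consistency_error(f_theta, x_t, t, x_s, s):
    # Step-wise consistency error: a few-step model's posterior-mean
    # predictions at two adjacent noise levels should agree (as in
    # consistency models). f_theta(x, t) is a hypothetical predictor
    # returning the estimated clean sample.
    return float(np.sum((f_theta(x_t, t) - f_theta(x_s, s)) ** 2))

def preference_unlearning_loss(err_keep, err_forget,
                               err_keep_ref, err_forget_ref, beta=1.0):
    # DPO-style logistic preference loss where the implicit reward is
    # the *negative* consistency error, measured relative to a frozen
    # reference model (as in Diffusion-DPO). Low error on retained
    # ("keep") data and high error on the target ("forget") concept
    # yields a large margin and hence a small loss, driving forgetting.
    r_keep = -(err_keep - err_keep_ref)
    r_forget = -(err_forget - err_forget_ref)
    logits = beta * (r_keep - r_forget)
    return -np.log(1.0 / (1.0 + np.exp(-logits)))  # -log sigmoid(margin)
```

Under this sketch, gradient descent on `preference_unlearning_loss` pushes the few-step model's consistency error up on forget-concept samples and down on retained samples, which matches the stated goal of unlearning without re-distillation.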
Submission Number: 135