Keywords: Diffusion Models, Style Unlearning
Abstract: For diffusion models, machine unlearning is crucial for mitigating the intellectual property and ethical challenges arising from unauthorized style replication. However, most existing unlearning methods struggle to completely remove styles while preserving generation quality, as their erasure mechanisms rely on the noise distribution, where style and content are intrinsically entangled. To address this, we propose $\textbf{S}$tyle $\textbf{U}$nlearning in $\textbf{D}$iffusion $\textbf{M}$odels (SUDM), a novel framework based on hybrid-attention distillation, in which cross-attention provides style-agnostic supervision to self-attention for targeted style erasure. By leveraging the structural distinctions among attention components, SUDM enables more effective destylized modeling than previous work. To further ensure content preservation and robust generalization, we introduce query-consistency and parameter-consistency losses into the overall objective function. Finally, extensive experiments and user studies on Stable Diffusion demonstrate that SUDM achieves more thorough style erasure with minimal quality degradation, outperforming existing unlearning methods in both visual fidelity and precision. Our code is available in the supplementary materials.
Supplementary Material: zip
Primary Area: generative models
Submission Number: 11152