Style Unlearning in Diffusion Models

18 Sept 2025 (modified: 14 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Diffusion Models, Style Unlearning
Abstract: For diffusion models, machine unlearning is crucial for mitigating the intellectual property and ethical challenges arising from unauthorized style replication. However, most existing unlearning methods struggle to remove styles completely while preserving generation quality, because their erasure mechanisms operate on the noise distribution, where style and content are intrinsically entangled. To address this, we propose $\textbf{S}$tyle $\textbf{U}$nlearning in $\textbf{D}$iffusion $\textbf{M}$odels (SUDM), a novel framework based on hybrid-attention distillation, in which cross-attention provides style-agnostic supervision to self-attention for targeted style erasure. By leveraging the structural distinctions among attention components, SUDM enables more effective destylized modeling than previous work. To further ensure content preservation and robust generalization, we introduce query-consistency and parameter-consistency losses into the overall objective. Finally, extensive experiments and user studies on Stable Diffusion demonstrate that SUDM achieves more thorough style erasure with minimal quality degradation, outperforming existing unlearning methods in both visual fidelity and precision. Our code is available in the supplementary materials.
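The abstract names three components of the training objective: a hybrid-attention distillation term (cross-attention supervising self-attention), a query-consistency term, and a parameter-consistency term. The sketch below is only an illustration of how such a composite loss could be assembled; the function name `sudm_loss`, the per-term MSE formulation, and the weights `lam_q`/`lam_p` are assumptions, not the paper's actual definitions.

```python
# Illustrative sketch of an SUDM-style composite objective, inferred from the
# abstract alone. All names, weights, and the choice of MSE per term are
# hypothetical; the paper's actual losses may differ.

def mse(a, b):
    """Mean squared error between two flat feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def sudm_loss(self_attn_feat, cross_attn_feat,
              query_new, query_orig,
              params_new, params_orig,
              lam_q=1.0, lam_p=0.1):
    # Distillation: push self-attention features toward the style-agnostic
    # cross-attention features (targeted style erasure).
    l_distill = mse(self_attn_feat, cross_attn_feat)
    # Query consistency: keep the unlearned model's attention queries close
    # to the original model's queries (content preservation).
    l_query = mse(query_new, query_orig)
    # Parameter consistency: penalize drift of the fine-tuned weights from
    # the pretrained weights (robust generalization).
    l_param = mse(params_new, params_orig)
    return l_distill + lam_q * l_query + lam_p * l_param
```

With identical inputs all three terms vanish and the loss is zero; any mismatch between self- and cross-attention features, queries, or parameters raises it proportionally.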
Supplementary Material: zip
Primary Area: generative models
Submission Number: 11152