Forget Many, Forget Right: Scalable and Precise Concept Unlearning in Diffusion Models

Published: 26 Jan 2026 · Last Modified: 17 Feb 2026 · ICLR 2026 Poster · CC BY 4.0
Keywords: machine unlearning, large-scale unlearning, diffusion model
Abstract: While multi-concept unlearning has made progress, scaling it up remains difficult, as existing methods face three persistent challenges: **(i)** they often introduce conflicting weight updates, making some targets hard to unlearn or degrading generative capability; **(ii)** they lack precise mechanisms to keep unlearning strictly confined to the target concepts, causing collateral damage to similar content; **(iii)** many approaches rely on additional data or auxiliary modules, creating scalability and efficiency bottlenecks as the number of concepts grows. To address these challenges simultaneously, we propose **Scalable-Precise Concept Unlearning (ScaPre)**, a unified, lightweight framework for scalable and precise large-scale unlearning. ScaPre introduces a *conflict-aware stable design* that combines a spectral trace regularizer with geometry alignment to stabilize the optimization space, suppress conflicting updates, and preserve the pretrained global structure. In addition, the *Informax Decoupler* identifies concept-relevant parameters and adaptively reweights updates, confining unlearning to the target subspace without collateral damage. ScaPre admits an efficient closed-form solution that requires no additional data or auxiliary sub-models, maintaining both scalability and precision. Comprehensive experiments on large-scale object, style, and explicit-content benchmarks demonstrate that ScaPre removes target concepts effectively while maintaining generation quality: it forgets up to **5×** more concepts than the best baseline while keeping generative quality within acceptable limits, and it outperforms existing multi-concept approaches in precision and efficiency, setting a new state of the art for large-scale unlearning.
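
The abstract leaves the concrete update rule to the paper, but the ingredients it names (a closed-form solution, a trace-style penalty on the weight update, and a decoupling step that confines edits to concept-relevant parameters) admit a compact illustration. The sketch below is **not** ScaPre itself: it is a minimal NumPy stand-in that assumes a least-squares edit of a single projection matrix (e.g., a cross-attention key/value map), reads the "spectral trace regularizer" as a penalty tr((W'−W)ᵀ(W'−W)) on the update, and approximates the Informax Decoupler with a simple top-k magnitude mask. All function names and hyperparameters (`closed_form_unlearn`, `topk_decouple`, `lam`, `mu`, `keep_ratio`) are hypothetical.

```python
import numpy as np


def closed_form_unlearn(W, C_forget, V_target, C_keep, lam=1.0, mu=0.1):
    """Hypothetical closed-form concept edit (illustration only).

    Minimizes over W':
        ||W' C_f - V_t||_F^2             # forget: remap target concepts
      + lam * ||(W' - W) C_k||_F^2       # preserve outputs for kept concepts
      + mu  * tr((W' - W)^T (W' - W))    # trace penalty on the update
    The mu-term is one plausible reading of a "trace regularizer": it bounds
    the total energy of the edit, damping conflicts between many
    simultaneous concept updates.
    """
    d_in = W.shape[1]
    A = C_forget @ C_forget.T + lam * (C_keep @ C_keep.T) + mu * np.eye(d_in)
    B = V_target @ C_forget.T + lam * (W @ C_keep) @ C_keep.T + mu * W
    return B @ np.linalg.inv(A)  # normal-equation (closed-form) solution


def topk_decouple(W_old, W_new, keep_ratio=0.05):
    """Stand-in for a parameter-selection step: keep only the
    largest-magnitude entries of the update so the edit stays confined to
    the parameters most affected by the target concepts. (The paper's
    Informax Decoupler criterion is not specified in the abstract.)"""
    delta = W_new - W_old
    k = max(1, int(keep_ratio * delta.size))
    thresh = np.partition(np.abs(delta).ravel(), -k)[-k]  # k-th largest |entry|
    return W_old + np.where(np.abs(delta) >= thresh, delta, 0.0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_out, d_in, n_f, n_k = 320, 768, 50, 200
    W = rng.normal(size=(d_out, d_in)) / np.sqrt(d_in)
    C_f = rng.normal(size=(d_in, n_f))           # embeddings to erase
    C_k = rng.normal(size=(d_in, n_k))           # embeddings to preserve
    neutral = rng.normal(size=(d_in, 1))
    V_t = np.repeat(W @ neutral, n_f, axis=1)    # remap all targets to "neutral"
    W_edit = topk_decouple(W, closed_form_unlearn(W, C_f, V_t, C_k))
    print("forget shift:", np.linalg.norm((W_edit - W) @ C_f))
    print("keep   shift:", np.linalg.norm((W_edit - W) @ C_k))
```

In this toy run the masked edit moves outputs for the forgotten embeddings far more than for the preserved ones, which is the qualitative behavior the abstract claims; the real method's regularizer, geometry-alignment term, and decoupling criterion will differ from this sketch.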
Supplementary Material: zip
Primary Area: generative models
Submission Number: 2845