Towards Robust and Scalable Knowledge Editing in Text-to-Image Diffusion Models

Published: 01 Sept 2025, Last Modified: 18 Nov 2025 · ACML 2025 Conference Track · CC BY 4.0
Abstract: Knowledge editing in Text-to-Image (T2I) diffusion models aims to update specific factual associations without disrupting unrelated knowledge. However, existing methods often suffer from unintended collateral effects: editing a single fact can alter the representation of non-target named entities and degrade generation quality for unrelated prompts. These effects become more severe in real-world, dynamic settings that require frequent updates. To address this challenge, we introduce a novel editing framework that supports large-scale T2I knowledge editing. The framework incorporates our proposed Entity-Aware Text Alignment (EATA), which penalizes unintended changes to unaffected entities, and employs a principled null-space projection strategy to minimize perturbations to existing knowledge. Experimental results demonstrate that our approach enables precise and robust large-scale T2I knowledge editing, preserves the integrity of unrelated content, and maintains high generation fidelity, while scaling to continuous editing scenarios.
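To illustrate the null-space projection idea referenced in the abstract, the sketch below shows a generic form of the technique as it is commonly used in model-editing work: a weight update is projected onto the null space of the "preserved" key matrix so that the edited layer's outputs for preserved facts are unchanged. This is an assumption-laden toy example, not the paper's actual formulation; the function name `null_space_projector`, the matrix shapes, and the tolerance are all hypothetical choices made for illustration.

```python
import numpy as np


def null_space_projector(K0: np.ndarray, tol: float = 1e-6) -> np.ndarray:
    """Build a projector onto the null space of the preserved-key matrix K0.

    K0 has shape (d, n): each column is the key (text-feature) vector of a
    fact that must remain unchanged after editing. Any update delta_W that
    is right-multiplied by this projector satisfies (delta_W @ P) @ K0 ~ 0,
    so the edited layer still maps preserved keys to their old values.
    """
    # Eigendecomposition of the (d x d) key covariance K0 K0^T.
    cov = K0 @ K0.T
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Eigenvectors with (near-)zero eigenvalues span the null space of K0^T.
    null_basis = eigvecs[:, eigvals < tol]
    return null_basis @ null_basis.T  # projector P, shape (d, d)


# Toy usage: constrain an edit so it cannot disturb preserved keys.
d, n_preserved = 8, 3
rng = np.random.default_rng(0)
K0 = rng.normal(size=(d, n_preserved))   # keys of facts to preserve
delta_W = rng.normal(size=(d, d))        # raw (unconstrained) weight edit
P = null_space_projector(K0)
delta_W_safe = delta_W @ P               # projected, knowledge-preserving edit
print(np.abs(delta_W_safe @ K0).max())   # ~0: preserved keys are unaffected
```

In this toy setting, applying `W + delta_W_safe` leaves the layer's responses to the preserved keys numerically intact, which is the property the abstract attributes to the null-space projection strategy; how the paper constructs the key matrix and combines this with EATA is not specified here.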
Submission Number: 242