DGS: Robust and Diverse Watermarks for Diffusion Models

ICLR 2026 Conference Submission 15563 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: watermarking, diffusion models
Abstract: Recent advances in diffusion-based generative models, such as Stable Diffusion, have transformed image generation, making it possible to create high-quality and diverse content from textual prompts. However, these advancements also raise concerns about intellectual property theft and the authenticity of generated content. A promising solution to these issues is watermarking, which embeds hidden information into generated content to ensure traceability and protect intellectual property. In this paper, we propose Dynamic Gaussian Shading (DGS), a novel watermarking method specifically designed for diffusion models. DGS uses a dynamic, distance-aware re-localization approach for watermark embedding that adapts to the latent space of generative models, enhancing both the robustness of the watermark and the diversity of the generated images. We evaluate DGS in terms of its watermarking effectiveness, resistance to various attacks, and the diversity of generated images. Our experimental results show that DGS achieves high watermark accuracy, maintains robustness against attacks, and preserves image quality. Furthermore, we introduce a new metric, Encoded Feature Diversity (EFD), to measure the diversity of generated images across different watermarking methods. Compared to existing baseline methods, DGS strikes a significantly improved balance between watermark reliability and image generation diversity. The proposed method provides a promising approach to embedding watermarks in generative models, supporting the secure use of AI-generated content while maintaining the creative potential of these powerful tools.
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 15563