GaussMarker: Robust Dual-Domain Watermark for Diffusion Models

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose the first dual-domain watermark for diffusion models and achieve state-of-the-art watermark detection performance.
Abstract: As Diffusion Models (DMs) generate increasingly realistic images, related issues such as copyright and misuse have become a growing concern. Watermarking is a promising solution. Existing methods inject the watermark into a *single domain* of the initial Gaussian noise used for generation, which limits robustness. This paper presents GaussMarker, the first *dual-domain* DM watermarking approach, which uses a pipelined injector to consistently embed watermarks in both the spatial and frequency domains. To further boost robustness against image manipulations and advanced attacks, we introduce a model-independent, learnable Gaussian Noise Restorer (GNR) that refines the Gaussian noise extracted from manipulated images, and we strengthen detection by fusing the detection scores of both watermarks. GaussMarker efficiently achieves state-of-the-art performance under eight image distortions and four advanced attacks across three versions of Stable Diffusion, with higher recall and lower false positive rates, as preferred in real applications.
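To make the dual-domain idea concrete, the following is a minimal illustrative sketch, not the paper's actual injector or GNR: it embeds a key-derived sign pattern in the spatial domain of the initial Gaussian noise (in the spirit of Gaussian Shading) and a ring pattern in its Fourier spectrum (in the spirit of Tree-Ring watermarks), then fuses a score from each domain at detection time. All function names, parameters, and the simple averaging fusion are hypothetical.

```python
import numpy as np

def embed_dual_domain(shape=(64, 64), key=0, ring_radius=10):
    """Hypothetical sketch: watermark Gaussian noise in two domains."""
    rng = np.random.default_rng(key)
    noise = rng.standard_normal(shape)

    # Spatial-domain watermark: force the sign of each noise value to
    # match a key-derived bit pattern (magnitudes stay half-Gaussian).
    bits = rng.integers(0, 2, size=shape)
    spatial_wm = np.where(bits == 1, np.abs(noise), -np.abs(noise))

    # Frequency-domain watermark: write a constant ring into the
    # (shifted) 2-D Fourier spectrum of the noise.
    freq = np.fft.fftshift(np.fft.fft2(spatial_wm))
    cy, cx = shape[0] // 2, shape[1] // 2
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    ring = np.abs(np.hypot(yy - cy, xx - cx) - ring_radius) < 1.0
    freq[ring] = 50.0  # arbitrary constant marking the ring
    watermarked = np.real(np.fft.ifft2(np.fft.ifftshift(freq)))
    return watermarked, bits, ring

def detect(noise, bits, ring):
    """Score both watermarks and fuse them (here: simple average)."""
    # Spatial score: fraction of pixels whose sign matches the key bits.
    spatial_score = np.mean((noise > 0) == bits)
    # Frequency score: closeness of the ring magnitudes to the constant.
    freq = np.fft.fftshift(np.fft.fft2(noise))
    freq_score = 1.0 / (1.0 + np.mean(np.abs(np.abs(freq[ring]) - 50.0)))
    return 0.5 * spatial_score + 0.5 * freq_score
```

On watermarked noise both scores are high, while unrelated Gaussian noise scores near chance, so thresholding the fused score separates the two; the paper's actual pipeline additionally inverts the diffusion process to recover the noise from a generated image and repairs it with the GNR before scoring.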
Lay Summary: As AI-generated images become more realistic, concerns about their misuse and copyright issues are growing. One way to address this is by embedding invisible watermarks into the images. Most current methods add watermarks in only one part of the image generation process — the so-called “noise image” — but these watermarks can be easily removed or broken by simple image edits. In this work, we introduce GaussMarker, the first method that embeds watermarks in two domains: both the spatial structure and the frequency patterns of the noise used to generate images. We also design a tool called the Gaussian Noise Restorer (GNR) that helps recover and verify watermarks even after an image has been edited. Our approach works across different versions of popular image-generating models like Stable Diffusion, offering stronger protection with high accuracy and fewer false alarms, making it well-suited for real-world use. We believe that our approach can mitigate issues such as copyright infringement and misuse associated with DMs, thereby promoting the development of trustworthy generative AI.
Primary Area: Social Aspects->Security
Keywords: Diffusion Models, Watermark
Submission Number: 5912