Denoising Trajectory Biases for Zero-Shot AI-Generated Image Detection

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY-NC 4.0
Keywords: AI-generated image detection, diffusion model, DDIM, GAN
TL;DR: A novel zero-shot AI-generated image detection method based on denoising trajectory biases.
Abstract: The rapid advancement of generative models has led to the widespread emergence of highly realistic synthetic images, making the detection of AI-generated content increasingly critical. In particular, diffusion models have recently achieved unprecedented levels of visual fidelity, further intensifying these concerns. While most existing approaches rely on supervised learning, zero-shot detection methods have attracted growing interest because they bypass data collection and maintenance. Nevertheless, the performance of current zero-shot methods remains limited. In this paper, we introduce a novel zero-shot AI-generated image detection method. Unlike previous works that primarily focus on identifying artifacts in the final generated images, our work explores features within the image generation process itself that can be leveraged for detection. Specifically, we simulate the image sampling process via diffusion-based inversion and observe that the denoising outputs of generated images converge to the target image more rapidly than those of real images. Inspired by this observation, we compute the similarity between the original image and the outputs along the denoising trajectory, which is then used as an indicator of image authenticity. Since our method requires no training on any generated images, it avoids overfitting to specific generative models or dataset biases. Experiments across a wide range of generators demonstrate that our method achieves significant improvements over state-of-the-art supervised and zero-shot counterparts.
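The detection criterion described in the abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the authors' implementation): `denoise_fn` is a stand-in for a pretrained diffusion denoiser, the linear noising loop is a crude stand-in for DDIM inversion, and the score is the average cosine similarity between the original image and the denoised outputs along the trajectory, with faster convergence (higher similarity) taken as evidence the image was generated.

```python
import numpy as np

def trajectory_score(image, denoise_fn, num_steps=10):
    """Average similarity between an image and its denoising trajectory.

    `denoise_fn(x_t, alpha)` is assumed to map a noised image at noise
    level `alpha` to a predicted clean image; here it is a placeholder
    for a pretrained diffusion denoiser.
    """
    x = image.astype(np.float64)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(x.shape)
    score = 0.0
    for t in range(num_steps, 0, -1):
        alpha = t / num_steps
        # Crude stand-in for diffusion-based (DDIM) inversion:
        # linearly blend the image toward Gaussian noise.
        x_t = (1 - alpha) * x + alpha * noise
        x_hat = denoise_fn(x_t, alpha)  # denoising output at this step
        # Cosine similarity between the original image and the output.
        sim = float(np.dot(x.ravel(), x_hat.ravel())) / (
            np.linalg.norm(x) * np.linalg.norm(x_hat) + 1e-12)
        score += sim
    return score / num_steps
```

A generated image, which the denoiser can reconstruct quickly, would yield a score near 1 early in the trajectory; a real image converges more slowly, yielding a lower average similarity, so thresholding this score gives a zero-shot decision rule.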
Supplementary Material: zip
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 3234