Consistency Verification for Detecting AI-Generated Images

27 Sept 2024 (modified: 05 Feb 2025) | Submitted to ICLR 2025 | CC BY 4.0
Keywords: AI-generated image detection, Generative models, Diffusion models, GAN
Abstract: With the rapid development of generative models, AI-generated images have sparked significant concerns regarding their potential misuse for malicious purposes, highlighting the urgent need for AI-generated image detection. Current methods primarily focus on training a binary classifier to detect generated images. However, the efficacy of these methods is critically dependent on the quantity and quality of the collected AI-generated images. More importantly, they suffer from a generalization challenge: \emph{the literature lacks sufficient exploration of whether a binary classifier trained on images from a specific diffusion model can effectively generalize to images generated by other models.} In this work, we propose a novel framework termed \textbf{con}sistency \textbf{v}erification (ConV) for AI-generated image detection, a new approach that performs detection without requiring any AI-generated images. In particular, we introduce two functions and establish a principle for designing them so that their outputs remain consistent for natural images but exhibit significant inconsistency for AI-generated images. Our principle shows that the gradients of these two functions must lie within two mutually orthogonal subspaces. This enables a training-free detection approach: an image is identified as AI-generated if a transformation along its data manifold results in a substantial change in the loss value of a self-supervised model pre-trained on natural images. This detection framework gives ConV a unique advantage over existing methods: \emph{ConV identifies AI-generated images by fitting the distribution of natural images rather than that of AI-generated images.} Extensive experiments across various benchmarks validate the effectiveness of the proposed ConV.
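To make the stated decision rule concrete, below is a minimal illustrative sketch of the training-free criterion described in the abstract; it is not the authors' code. The names `ssl_model`, `manifold_transform`, and `threshold` are assumptions standing in for the pre-trained self-supervised model, the manifold-preserving transformation, and the detection threshold, none of which are specified in this abstract.

```python
import torch

def conv_detect(image, ssl_model, manifold_transform, threshold):
    """Hypothetical sketch of the ConV decision rule (assumed interfaces).

    `ssl_model.loss(x)` is assumed to return the self-supervised training
    loss for input `x`; `manifold_transform(x)` is an assumed callable that
    moves `x` along its data manifold. Neither API is given by the paper.
    """
    with torch.no_grad():
        loss_before = ssl_model.loss(image)       # loss on the original image
        transformed = manifold_transform(image)   # transformation along the data manifold
        loss_after = ssl_model.loss(transformed)  # loss after the transformation

    # Per the abstract: the loss is expected to stay roughly consistent for
    # natural images but to change substantially for AI-generated images.
    inconsistency = (loss_after - loss_before).abs().item()
    return inconsistency > threshold  # True -> flagged as AI-generated
```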
Primary Area: generative models
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8640