Generalization through Discrepancy: Leveraging Distributional Fitting Gaps for AI-Generated Image Detection

ICLR 2026 Conference Submission5343 Authors

15 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: AI-Generated Image Detection, Distributional Discrepancy, Pre-training for Detection, Cross-Model Detection
Abstract: The generalization of detectors for AI-generated images remains a critical challenge: methods trained on one generative family often fail when tested on unseen architectures. To tackle this challenge, we examine the distribution-approximation nature of generative modeling and posit that a universal forensic signal lies in the discrepancy between mathematically precise image rescaling traces and the imperfect approximations of those traces that generative models learn from training data. We introduce a contrastive pre-training framework that sensitizes a feature extractor to these subtle rescaling artifacts by exploiting their periodic patterns and position-shift properties, using only real images for training. Our method sets a new state-of-the-art on both GAN- and diffusion-generated benchmarks and offers a new, robust perspective on detection generalization through the lens of distributional fitting divergence. The code and models will be made publicly available.
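The abstract does not specify the rescaling traces in detail; as an illustration only, the following sketch (the 1-D setup and all names are our own, not the authors') shows the kind of "mathematically precise" periodic artifact that interpolation-based rescaling leaves behind. After factor-2 linear upscaling, every interpolated sample is the exact average of its neighbors, so a linear-predictor residual vanishes with a fixed period, whereas a generator that merely approximates the image distribution would reproduce this pattern only imperfectly.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)          # stand-in for one row of a real image

# Factor-2 upscaling with linear interpolation (a 1-D stand-in for bilinear rescaling)
up = np.empty(2 * len(x) - 1)
up[0::2] = x                          # original samples at even positions
up[1::2] = (x[:-1] + x[1:]) / 2       # interpolated samples: exact neighbor averages

# Linear-predictor residual: zero wherever a sample is a perfect
# interpolation of its two neighbors
r = up[1:-1] - (up[:-2] + up[2:]) / 2

# The residual vanishes with period 2 -- an exactly periodic rescaling trace
print(np.abs(r[0::2]).max())   # interpolated positions -> 0.0
print(np.abs(r[1::2]).mean())  # original positions -> generally non-zero
```

A detector pre-trained to notice such periodic, position-dependent residual structure (and its shift under cropping) has a signal that is independent of any particular generator family, which is the intuition behind training on real images only.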
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 5343