Adaptive Gray: Reducing Color Dependency to Improve Generalization in Deepfake Detection

ICLR 2026 Conference Submission 8232 Authors

17 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Diffusion image model; Deepfake Detection; Image Editing
Abstract: Deepfake technology, powered by advanced generative models such as GANs and diffusion models, has raised serious ethical and security concerns due to its potential for misuse in creating realistic yet deceptive content. These generative models are becoming increasingly sophisticated, making it harder for humans to distinguish real images from generated ones and highlighting the need for reliable machine-based detection. However, current detection methods generalize poorly across different generative models (cross-generator) and diverse image scenarios (cross-dataset), such as faces, landscapes, and objects, limiting their applicability across various contexts. To address this challenge, we identify that color dependency is often unnecessary and may even impede deepfake detection performance. Building on this insight, we introduce Adaptive Gray (AG), a novel approach designed to improve classifier generalization by compressing the RGB channels of images. Our experiments on the large-scale GenImage dataset demonstrate that Adaptive Gray achieves improvements of up to 19.9\% in average ACC, 22.0\% in AP, and 20.1\% in TPR (at FPR = 5\%), consistently outperforming state-of-the-art classifiers. Meanwhile, inference efficiency improves by up to $1 \times 10^{4}$ times.
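The abstract states only that AG compresses the RGB channels to remove color dependency; the "adaptive" weighting itself is not described here. As a minimal illustrative sketch (not the paper's actual method), the core idea of collapsing the three color channels into one before classification can be written with fixed luminance weights as a stand-in:

```python
import numpy as np

# ITU-R BT.601 luma coefficients -- an illustrative stand-in for the
# paper's (unspecified) adaptive channel weighting.
LUMA_WEIGHTS = np.array([0.299, 0.587, 0.114])

def compress_channels(img: np.ndarray) -> np.ndarray:
    """Collapse an HxWx3 RGB image to a single gray channel, then
    replicate it to HxWx3 so pretrained 3-channel backbones still accept it."""
    gray = img @ LUMA_WEIGHTS                      # weighted sum over the channel axis
    return np.repeat(gray[..., None], 3, axis=-1)  # identical channels: color cues removed

img = np.random.rand(224, 224, 3).astype(np.float32)
out = compress_channels(img)
print(out.shape)  # (224, 224, 3)
```

Replicating the gray channel back to three channels is a common trick when feeding grayscale inputs to classifiers pretrained on RGB data; the detector then sees luminance structure but no chromatic signal.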
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 8232