Abstract: Diffusion models, particularly Stable Diffusion Models (SDMs), have recently emerged as a focal point within the generative artificial intelligence sector, acclaimed for their superior visual fidelity and versatility. Despite their rising prominence, the challenge of detecting SDM-generated images has been somewhat overlooked, raising concerns over their potential misuse for nefarious purposes. This paper delves into the complexities of differentiating authentic images from those generated by SDMs, offering three significant contributions to the field. First, we introduce a varied synthetic image dataset named SDM-Fakes, which consists of six subsets produced with the txt2img, img2img, and inpainting schemes. Second, we develop both CNN-based and Transformer-based detection models to identify artificial images, assessing a range of cutting-edge models. Third, we pioneer the evaluation of these detection models' generalization capabilities across different generation schemes, and we further explore the impact of unknown perturbations on these detectors. Through comprehensive testing, we demonstrate that while current models are adept at recognizing SDM-generated images, there remains a significant need to improve their generalization to cross-scheme tasks and their robustness to unknown perturbations.