Are handcrafted filters helpful for attributing AI-generated images?

Published: 20 Jul 2024, Last Modified: 21 Jul 2024 · MM2024 Poster · CC BY 4.0
Abstract: Recently, a vast number of image generation models have been proposed, raising concerns regarding the misuse of these artificial intelligence (AI) techniques for generating fake images. To attribute AI-generated images, existing schemes usually design and train deep neural networks (DNNs) to learn the model fingerprints, which typically requires a large amount of data for effective learning. In this paper, we aim to answer the following two questions for AI-generated image attribution: 1) is it possible to design useful handcrafted filters to facilitate fingerprint learning? and 2) how can we reduce the amount of training data once the handcrafted filters are incorporated? We first propose a set of Multi-Directional High-Pass Filters (MHFs), which are capable of extracting subtle fingerprints from various directions. Then, we propose a Directional Enhanced Feature Learning network (DEFL) that takes both the MHFs and randomly initialized filters into consideration. The output of the DEFL is fused with semantic features to produce a compact fingerprint. To make the compact fingerprint discriminative among different models, we propose a Dual-Margin Contrastive (DMC) loss to tune our DEFL. Finally, we propose a reference-based fingerprint classification scheme for image attribution. Experimental results demonstrate that our MHFs are indeed helpful for attributing AI-generated images. The performance of our proposed method is significantly better than the state of the art for both closed-set and open-set image attribution, while only a small number of images are required for training.
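The abstract's core idea of handcrafted multi-directional high-pass filters can be illustrated with a minimal sketch. The paper does not specify its MHF construction here, so the 3x3 first-order difference kernels below, and the names `DIRECTIONAL_KERNELS` and `directional_residuals`, are illustrative assumptions only: each kernel sums to zero (suppressing low-frequency image content) and differences the image along one direction, so the stacked outputs are direction-wise high-pass residuals of the kind a fingerprint-learning network could consume.

```python
import numpy as np

# Hypothetical 3x3 directional high-pass kernels (NOT the paper's actual
# MHFs): first-order differences along four directions. Each sums to zero,
# so smooth (low-frequency) content is suppressed.
DIRECTIONAL_KERNELS = {
    "horizontal": np.array([[0, 0, 0], [-1, 1, 0], [0, 0, 0]], dtype=float),
    "vertical":   np.array([[0, -1, 0], [0, 1, 0], [0, 0, 0]], dtype=float),
    "diag_main":  np.array([[-1, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float),
    "diag_anti":  np.array([[0, 0, -1], [0, 1, 0], [0, 0, 0]], dtype=float),
}

def conv2d_valid(img, kernel):
    """Naive 'valid'-mode 2-D correlation; sufficient for a small demo."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def directional_residuals(img):
    """Stack the high-pass residual from every direction into one tensor."""
    return np.stack([conv2d_valid(img, k) for k in DIRECTIONAL_KERNELS.values()])

# A horizontal ramp only responds to the horizontal difference filter:
img = np.tile(np.arange(8, dtype=float), (8, 1))  # img[i, j] = j
res = directional_residuals(img)                  # shape (4, 6, 6)
```

In a full pipeline, such fixed kernels would sit alongside learnable filters (as the DEFL does with randomly initialized ones), with the residual stack fed into the feature-learning network rather than the raw pixels.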
Primary Subject Area: [Generation] Social Aspects of Generative AI
Relevance To Conference: In recent years, AI image generation models have developed rapidly, raising public concern over their misuse for generating fake images. Such fake images can have negative impacts on society, such as the spread of fake news and deceptive advertising. Since AI image generation plays an important role in the field of multimedia, it is urgent for multimedia researchers to develop effective schemes that identify which models have been maliciously used to generate fake images. With this concern, our work addresses the problem of AI-generated image attribution, i.e., attributing images to real images, to specific GANs and diffusion models seen in training, or to unseen GANs and unseen diffusion models. We design a set of Multi-Directional High-Pass Filters (MHFs) for learning model-representative features, which are fused with semantic features for image attribution. Our proposed method achieves state-of-the-art performance in various challenging scenarios while requiring only a small amount of training data. By tackling the crucial issue of AI-generated image attribution, our work is closely aligned with the topic of "Social Aspects of Generative AI". It promotes the security of AI image generation models, contributing to the development of multimedia research.
Submission Number: 254