TL;DR: We introduce the OriPID dataset and a generalizable method with theoretical guarantees to identify original images from their text-guided diffusion model translations.
Abstract: Text-guided image-to-image diffusion models excel in translating images based on textual prompts, allowing for precise and creative visual modifications. However, such a powerful technique can be misused for *spreading misinformation*, *infringing on copyrights*, and *evading content tracing*. This motivates us to introduce the task of origin **ID**entification for text-guided **I**mage-to-image **D**iffusion models (**ID$\mathbf{^2}$**), which aims to retrieve the original image of a given translated query. A straightforward solution to ID$^2$ involves training a specialized deep embedding model to extract and compare features from both query and reference images. However, due to *visual discrepancies* across the generations produced by different diffusion models, this similarity-based approach fails when trained on images from one model and tested on those from another, limiting its effectiveness in real-world applications. To address this challenge, we contribute the first dataset and a theoretically guaranteed method, both emphasizing generalizability. The curated dataset, **OriPID**, contains abundant **Ori**gins and guided **P**rompts, which can be used to train and test potential **ID**entification models across various diffusion models. On the method side, we first prove the *existence* of a linear transformation that minimizes the distance between the pre-trained Variational Autoencoder (VAE) embeddings of generated samples and their origins. We then demonstrate that such a simple linear transformation can be *generalized* across different diffusion models. Experimental results show that the proposed method achieves satisfactory generalization performance, significantly surpassing similarity-based methods (+31.6% mAP), even those with generalization designs. The project is available at https://id2icml.github.io.
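To make the core idea concrete, the snippet below is a minimal, self-contained sketch of the kind of pipeline the abstract describes: fit a linear map on VAE embeddings of (origin, translation) pairs by least squares, then retrieve origins for a translated query by nearest-neighbour search in the mapped space. The embeddings here are synthetic and the names (`Z_orig`, `Z_gen`, `retrieve`) are hypothetical; this is not the authors' released implementation, which operates on embeddings from a pre-trained VAE encoder.

```python
# Hedged sketch: a linear map fitted on VAE-style embeddings, used for origin retrieval.
# Synthetic data stands in for real VAE embeddings; names and dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 64          # assumed (flattened) VAE embedding dimension
n_train = 500   # assumed number of (origin, translation) training pairs

# Z_orig[i]: embedding of an original image; Z_gen[i]: embedding of its translation,
# modeled here as a distorted copy of the origin embedding.
Z_orig = rng.standard_normal((n_train, d))
distortion = np.eye(d) + 0.1 * rng.standard_normal((d, d))
Z_gen = Z_orig @ distortion + 0.05 * rng.standard_normal((n_train, d))

# Ridge-regularized least squares for W minimizing ||Z_gen @ W - Z_orig||_F^2.
lam = 1e-3
W = np.linalg.solve(Z_gen.T @ Z_gen + lam * np.eye(d), Z_gen.T @ Z_orig)

def retrieve(query_emb, reference_embs, top_k=5):
    """Rank reference (origin) embeddings for a translated query by cosine similarity."""
    q = query_emb @ W
    q = q / np.linalg.norm(q)
    refs = reference_embs / np.linalg.norm(reference_embs, axis=1, keepdims=True)
    return np.argsort(-(refs @ q))[:top_k]

# Held-out translated query whose true origin is Z_orig[0]; index 0 should rank near the top.
query = Z_orig[0] @ distortion + 0.05 * rng.standard_normal(d)
print(retrieve(query, Z_orig))
```

In practice the same recipe would be applied to real VAE embeddings of the OriPID pairs; the paper's claim is that such a linear map, once fitted, transfers across diffusion models rather than being tied to the one used for training.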
Lay Summary: Modern AI tools can now rewrite any picture just by following a short text instruction—turning a daytime street into a rainy night scene, or adding new objects that never existed. While fun and useful, this power makes it easy to spread fake images, dodge copyright rules, and hide the true source of a picture. We tackle this problem by asking a simple but crucial question: given an edited image, can we reliably find the original photo it came from?
Our new task, called ID² (Origin IDentification for text-guiding Image-to-image Diffusion), shows why earlier “look-for-similar-parts” tricks break down: different AI editors leave very different fingerprints, so a system trained on one often fails on another. To fix this, we built OriPID, the first large benchmark that pairs thousands of originals with their AI-altered versions from many diffusion models. We then prove that a single linear tweak to the images’ hidden VAE features can pull each edited picture back toward its source—and that this tweak works across editors. In tests, our lightweight method beats previous similarity-based approaches by over 31 percentage points in mean average precision, bringing practical image provenance a big step closer. Code and data: id2icml.github.io.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://id2icml.github.io
Primary Area: Applications->Computer Vision
Keywords: Diffusion Models, Origin Identification
Submission Number: 9348