Tracing the Misuse of Personalized Textual Embeddings for Text-to-Image Models

Published: 06 Mar 2025 · Last Modified: 01 Apr 2025 · ICLR 2025 Workshop on Data Problems (Poster) · CC BY 4.0
Keywords: Text-to-Image model, Stable Diffusion, Textual Inversion, Misuse tracing, IP protection
Abstract: Text-to-Image (T2I) models have achieved great success in generating high-quality images from diverse prompts. The emerging technology of personalized textual embeddings further empowers T2I models to create realistic images based on users' personalized concepts. This has given rise to a new AI business, with many commercial platforms for sharing or selling valuable personalized embeddings. However, this powerful technology carries potential risks: malicious users might exploit personalized textual embeddings to generate illegal content. To address this concern, public platforms need reliable methods to trace misuse and hold bad actors accountable. In this paper, we introduce concept watermarking, a novel approach that embeds robust watermarks into images generated from personalized embeddings. Specifically, an encoder embeds watermarks in the embedding space, while a decoder extracts these watermarks from generated images. We also develop a novel end-to-end training strategy that decomposes the diffusion model's sampling process to ensure effective watermarking. Extensive experiments demonstrate that our concept watermarking effectively guards personalized textual embeddings while preserving their utility in terms of both visual fidelity and textual editability. More importantly, because the watermark exists at the concept level, it is robust against diverse processing distortions, diffusion sampling configurations, and adaptive attacks. Ablation studies further validate the design rationale of each key component.
Submission Number: 48
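The abstract describes an encoder that plants a bit message in the concept-embedding space and a decoder that recovers it. The following is a minimal toy sketch of that encode/decode idea, not the paper's method: all names, dimensions, and the linear projection basis are illustrative assumptions, and the real decoder reads the watermark from generated images via end-to-end training through the diffusion sampler rather than from the embedding residual.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 768   # CLIP text-embedding width used by Stable Diffusion
MSG_BITS = 48   # hypothetical watermark payload length
ALPHA = 1e-3    # small perturbation strength, to preserve embedding utility

# Hypothetical encoder basis: one orthonormal direction in embedding space
# per message bit (QR makes recovery exact in this idealized setting).
Q, _ = np.linalg.qr(rng.normal(size=(EMB_DIM, MSG_BITS)))
W = Q.T  # shape (MSG_BITS, EMB_DIM)

def embed_watermark(concept_emb, bits):
    """Shift the concept embedding by +/- ALPHA along each bit's direction."""
    signs = 2.0 * np.asarray(bits) - 1.0  # map {0, 1} -> {-1, +1}
    return concept_emb + ALPHA * signs @ W

def extract_watermark(wm_emb, original_emb):
    """Project the residual onto the bit directions and threshold at zero.
    (The paper's decoder instead extracts bits from generated images.)"""
    scores = W @ (wm_emb - original_emb)
    return (scores > 0).astype(int)

concept = rng.normal(size=EMB_DIM)          # stand-in personalized embedding
msg = rng.integers(0, 2, size=MSG_BITS)     # random watermark message
recovered = extract_watermark(embed_watermark(concept, msg), concept)
assert np.array_equal(recovered, msg)
```

Because the rows of `W` are orthonormal, each bit's perturbation is recovered independently here; the paper's harder problem is making such a signal survive diffusion sampling and image-space distortions.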