Detecting and Tracing Dataset Misuse in Fine-Tuning Text-to-Image Models

Published: 2025 · Last Modified: 15 Jan 2026 · IWQoS 2025 · CC BY-SA 4.0
Abstract: Text-to-image synthesis has become highly popular for generating realistic and stylized images, and specialized tasks often require fine-tuning generative models on domain-specific datasets. However, these valuable datasets face risks of unauthorized use and unapproved sharing, compromising the rights of their owners. We address the problem of dataset abuse during the fine-tuning of Stable Diffusion (SD) models for text-to-image (T2I) synthesis. We present a dataset watermarking framework designed to detect unauthorized usage and trace data leaks. Experiments demonstrate the framework's effectiveness, its minimal impact on the dataset (only 2% of the data needs to be modified for high detection accuracy), and its ability to trace data leaks. Our results also highlight the transferability and robustness of the framework, demonstrating its practical applicability for detecting dataset abuse. The code is available at https://github.com/inmbzpmdqv/treidzxyq.git
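The abstract reports that watermarking only about 2% of the dataset suffices for high detection accuracy. As a minimal sketch of that first step, the snippet below selects a 2% subset of sample indices to watermark; the uniform-random selection strategy and the `select_watermark_subset` helper are illustrative assumptions, not the paper's actual method.

```python
import random

def select_watermark_subset(num_samples, ratio=0.02, seed=0):
    """Pick the small fraction of sample indices to watermark.

    Assumptions (not from the paper): uniform random selection with a
    fixed seed so the owner can later reproduce which samples carry
    the watermark.
    """
    rng = random.Random(seed)
    k = max(1, int(num_samples * ratio))  # e.g. 200 out of 10,000
    return sorted(rng.sample(range(num_samples), k))

indices = select_watermark_subset(10_000)
print(len(indices))  # 200 samples earmarked for watermarking
```

Keeping the modified fraction this small is what makes the scheme low-impact: the fine-tuned model's quality on the remaining 98% of the data is essentially unaffected, while the marked subset still leaves a detectable signal.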