TL;DR: We present STAMP, a framework to detect whether a given dataset was used in LLM pretraining.
Abstract: Given that large portions of publicly available text are crawled to pretrain large language models (LLMs), data creators increasingly worry that their proprietary data may be used for model training without attribution or licensing. Their concerns are shared by benchmark curators, whose test sets might be compromised. In this paper, we present STAMP, a framework for detecting dataset membership—i.e., determining whether a dataset was included in the pretraining corpora of LLMs. Given an original piece of content, our proposal involves first generating multiple rephrases, each embedding a watermark with a unique secret key. One version is released publicly, while the others are kept private. Subsequently, creators can compare model likelihoods between the public and private versions using paired statistical tests to prove membership. We show that our framework successfully detects contamination across four benchmarks that appear only once in the training data and constitute less than 0.001% of the total tokens, outperforming several contamination detection and dataset inference baselines. We verify that STAMP preserves both the semantic meaning and utility of the original data. We apply STAMP to two real-world scenarios to confirm the inclusion of paper abstracts and blog articles in pretraining corpora.
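To make the detection step in the abstract concrete, here is a minimal, hypothetical sketch in Python of how one might score a public rephrase against a held-out private rephrase under a suspect model and run the paired one-sided test. It assumes HuggingFace transformers and SciPy, simplifies to a single private rephrase per document, and uses illustrative function and model names; it is not the authors' implementation (see the linked repository for the actual STAMP code).

```python
# Hypothetical sketch of the membership test described above (not the official STAMP code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from scipy.stats import ttest_rel

def sequence_log_likelihood(model, tokenizer, text: str) -> float:
    """Average per-token log-likelihood of `text` under the suspect model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # loss is the mean negative log-likelihood per token

def stamp_membership_test(model, tokenizer, public_docs, private_docs, alpha=0.05):
    """
    Paired one-sided test: does the suspect model assign systematically higher
    likelihood to the publicly released rephrases than to their private counterparts?
    public_docs[i] and private_docs[i] are rephrases of the same original document.
    """
    pub = [sequence_log_likelihood(model, tokenizer, d) for d in public_docs]
    priv = [sequence_log_likelihood(model, tokenizer, d) for d in private_docs]
    _, p_value = ttest_rel(pub, priv, alternative="greater")
    return p_value, p_value < alpha  # small p-value -> evidence of membership

# Example usage (model name is illustrative):
# tok = AutoTokenizer.from_pretrained("gpt2")
# lm = AutoModelForCausalLM.from_pretrained("gpt2")
# p, contaminated = stamp_membership_test(lm, tok, public_versions, private_versions)
```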
Lay Summary: Much of the publicly available data on the internet is used to train language models (such as ChatGPT), but content creators have no way of knowing if their data was used (without their permission) for training such models. In this work, we provide a tool for creators to “stamp” their writing. For a piece of original text, our approach generates several versions of that text with slightly different wordings, all conveying the same meaning. One of these versions can be publicly published, but the rest are to be kept private. Later, if a creator suspects that a language model used their published content for training, they can run our simple test to determine if the language model shows a strong preference for the public version compared to the private ones. If this is indeed the case, it suggests that the language model was trained on their public content. Our approach is simple to use, and in this paper, we show it can successfully detect whether a piece of text was used for training language models, while preserving the meaning and utility of the original content.
Link To Code: https://github.com/codeboy5/stamp
Primary Area: Social Aspects
Keywords: LLM, membership inference, dataset inference, watermarking, test set contamination
Submission Number: 15721