Data Portraits: Recording Foundation Model Training Data

Published: 26 Sept 2023 · Last Modified: 02 Nov 2023 · NeurIPS 2023 Datasets and Benchmarks Poster
Keywords: natural language processing, data documentation, dataset curation, documentation practices
TL;DR: We call for membership testing tools as a best practice for documenting large language model datasets, and provide a lightweight, fast demonstration system on two corpora: the Pile and the Stack.
Abstract: Foundation models are trained on increasingly immense and opaque datasets. Even as these models become central to building AI systems, it can be difficult to answer a straightforward question: has the model already encountered a given example during training? We therefore propose the widespread adoption of Data Portraits: artifacts that record training data and allow for downstream inspection. First, we outline the properties of such an artifact and discuss how existing solutions can be used to increase transparency. We then propose and implement a solution based on data sketching, stressing fast and space-efficient querying. Using our tools, we document a popular language modeling corpus (The Pile) and a recently released code modeling dataset (The Stack). We show that our solution enables answering questions about test set leakage and model plagiarism. Our tool is lightweight and fast, with an overhead of only 3% of the dataset size. We release a live interface of our tools and call on dataset and model creators to release Data Portraits as a complement to current documentation practices.
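The sketch-based membership test described in the abstract can be pictured as a probabilistic index over fixed-width chunks of the training text. The following is a minimal illustrative sketch using a Bloom filter over overlapping character chunks; the class name, chunk width, and hashing scheme here are assumptions for demonstration and are not the paper's released implementation.

```python
import hashlib


class BloomSketch:
    """Toy Bloom-filter membership sketch over fixed-width character
    chunks of training documents. Illustrative only: the actual Data
    Portraits system may use different hashing, widths, and storage."""

    def __init__(self, num_bits=1 << 20, num_hashes=3, width=50):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.width = width
        self.bits = bytearray(num_bits // 8)  # compact bit array

    def _positions(self, chunk):
        # Derive k bit positions from salted SHA-256 digests of the chunk.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{chunk}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add_document(self, text):
        # Record every overlapping width-character chunk of the document.
        for start in range(max(1, len(text) - self.width + 1)):
            chunk = text[start:start + self.width]
            for pos in self._positions(chunk):
                self.bits[pos // 8] |= 1 << (pos % 8)

    def contains(self, chunk):
        # True  -> chunk was probably recorded (false positives possible);
        # False -> chunk was definitely never recorded.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(chunk))
```

A downstream user could then test whether a suspicious model output overlaps the recorded corpus by querying its chunks against the sketch; because the filter stores only bits rather than the text itself, the overhead stays a small fraction of the dataset size.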
Supplementary Material: pdf
Submission Number: 524