Position: Language model developers should report train-test overlap

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Position Paper Track · Spotlight Poster · CC BY 4.0
Abstract: Language models are extensively evaluated, but correctly interpreting evaluation results requires knowledge of train-test overlap, which refers to the extent to which the language model is trained on the very data it is being tested on. The public currently lacks adequate information about train-test overlap: most models have no public train-test overlap statistics, and third parties cannot directly measure train-test overlap because they do not have access to the training data. To document the current state of practice, we review 30 models and find that just 9 report train-test overlap: 4 release training data under open-source licenses, enabling the community to measure train-test overlap directly, and 5 publish their train-test overlap methodology and statistics. By engaging with language model developers, we provide novel information about train-test overlap for three additional models. Overall, this position paper argues that language model developers should publish train-test overlap statistics and/or training data whenever they report evaluation results on public test sets. We hope our work increases transparency into train-test overlap and thereby strengthens community-wide trust in model evaluations.
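As background, train-test overlap is often approximated by checking whether n-grams from test examples also appear in the training corpus; the specific methodologies and thresholds used by individual developers are discussed in the full paper, not here. The sketch below is a minimal, hypothetical illustration of such a check, assuming whitespace tokenization and a fixed n-gram length; the function names and the default n-gram size are illustrative choices, not the paper's method.

```python
# Illustrative sketch only: a simple n-gram membership check between a
# training corpus and a public test set. Not the methodology of the paper.
from typing import Iterable, Set, Tuple


def ngrams(tokens: list, n: int) -> Set[Tuple[str, ...]]:
    """Set of contiguous n-grams in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def build_train_index(train_docs: Iterable[str], n: int = 13) -> Set[Tuple[str, ...]]:
    """Collect every n-gram that appears anywhere in the training corpus."""
    index: Set[Tuple[str, ...]] = set()
    for doc in train_docs:
        index |= ngrams(doc.split(), n)  # whitespace tokenization (an assumption)
    return index


def overlap_fraction(test_examples: Iterable[str],
                     train_index: Set[Tuple[str, ...]],
                     n: int = 13) -> float:
    """Fraction of test examples sharing at least one n-gram with the training data."""
    examples = list(test_examples)
    if not examples:
        return 0.0
    flagged = sum(1 for ex in examples if ngrams(ex.split(), n) & train_index)
    return flagged / len(examples)
```

In practice, developers differ in tokenization, n-gram length, and the criterion for flagging an example as contaminated, which is why the paper argues that these choices and the resulting statistics should be reported alongside evaluation results.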
Lay Summary: Language models are computer programs that learn to understand and generate human language by studying huge amounts of text, like books or websites. To see how well they work, we test them with tasks like answering questions or finishing sentences. But to understand the results of these tests, we need to know how much of the test material the model has seen during its training—this is called *train-test overlap*. The trouble is, we often don't know if there's overlap because most model creators don't share this detail. In a review of 30 language models, only 9 provided details: 4 shared their training data openly for others to check, and 5 explained how they looked for overlap. This gap makes it tough to interpret the test results. Our position is that model developers should always report train-test overlap details—either by sharing training data or by reporting train-test overlap statistics. This openness would help us trust language model tests more.
Primary Area: Data Set Creation, Curation, and Documentation
Keywords: train-test overlap, language model
Submission Number: 83