Abstract: If LLM training data is polluted with benchmark
test data, then benchmark performance gives biased estimates of out-of-distribution (OOD) generalization. Typical ‘decontamination’ filters use
n-gram matching, which fails to detect ‘semantic’
duplicates: sentences with equivalent (or near-equivalent) content that are not close in string
space. We study this ‘soft’ contamination of
training data by semantic duplicates. Among
other experiments, we embed the Olmo3 training
corpus and find that: 1) contamination remains
widespread, e.g. we find semantic duplicates for
78% of CodeForces and exact duplicates for 50%
of ZebraLogic problems; 2) including semantic
duplicates of benchmark data in training does
improve benchmark performance; and 3) when
finetuning on duplicates of benchmark datapoints,
performance also improves on truly held-out datapoints from the same benchmark. We argue that
recent benchmark gains are thus confounded: the
prevalence of soft contamination means gains reflect both genuine capability improvements and
the accumulation of test data, exact and effective, in growing training corpora.
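To make the distinction concrete, the sketch below contrasts n-gram decontamination with embedding-based semantic-duplicate detection. It is a minimal illustration, not the paper's pipeline: the embedding model, the n-gram size, and the 0.9 similarity threshold are illustrative assumptions.

```python
# Minimal sketch: n-gram matching misses paraphrases that embedding
# similarity catches. Model name and threshold are illustrative.
from sentence_transformers import SentenceTransformer


def ngram_overlap(a: str, b: str, n: int = 8) -> bool:
    """Flag contamination only if some word n-gram is shared verbatim."""
    def grams(s: str) -> set:
        words = s.split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return bool(grams(a) & grams(b))


def semantic_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    """Flag contamination if embedding cosine similarity exceeds a threshold."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice
    ea, eb = model.encode([a, b], normalize_embeddings=True)
    return float(ea @ eb) >= threshold  # dot product = cosine (unit vectors)


test_item = "Compute the minimum number of swaps needed to sort the array."
train_item = "Find the fewest exchanges required to put the list in order."
print(ngram_overlap(test_item, train_item))       # False: no verbatim 8-gram
print(semantic_duplicate(test_item, train_item))  # likely True: a paraphrase
```

A string-level filter of this kind passes the paraphrased pair untouched, while the embedding check flags it, which is the failure mode the abstract attributes to typical decontamination pipelines.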