Keywords: Large Language Model, Data Contamination, Deep Learning
TL;DR: We propose DVD, a variance-based detector that reliably identifies variant contamination in LLM evaluation, outperforming perplexity, edit-distance, and similarity baselines across datasets and models.
Abstract: Evaluating large language models (LLMs) is increasingly confounded by variant contamination: the training corpus contains semantically equivalent yet lexically or syntactically altered versions of test items. Unlike verbatim leakage, these paraphrased or structurally transformed variants evade existing detectors based on sampling consistency or perplexity, thereby inflating benchmark scores via memorization rather than genuine reasoning. We formalize this problem and introduce DVD (Detection via Variance of generation Distribution), a single-sample detector that models the local output distribution induced by temperature sampling. Our key insight is that contaminated items trigger alternation between a memory-adherence state and a perturbation-drift state, yielding abnormally high variance in the synthetic difficulty of low-probability tokens; uncontaminated items remain in drift with comparatively smooth variance. We construct the first benchmark for variant contamination across two domains—Omni-MATH and SuperGPQA—by generating and filtering semantically equivalent variants, and simulate contamination via fine-tuning models of different scales and architectures (Qwen2.5 and Llama3.1). Across datasets and models, DVD consistently outperforms perplexity-based, Min-k% probability, edit-distance (CDD), and embedding-similarity baselines, while exhibiting strong robustness to hyperparameters. Our results establish variance of the generation distribution as a principled and practical fingerprint for detecting variant contamination in LLM evaluation.
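As a rough illustration of the variance-based signal described in the abstract, the sketch below scores a single temperature-sampled generation from its per-token probabilities: it treats surprisal as the "synthetic difficulty" of each token, keeps the low-probability tail, and uses the variance of that tail as the contamination score. The difficulty definition, the tail fraction, and the decision threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dvd_score(token_probs, low_prob_fraction=0.2):
    """Variance-based contamination score for one temperature-sampled generation.

    token_probs: probabilities the model assigned to its own sampled tokens.
    The per-token 'difficulty' is taken here as surprisal (negative log-prob);
    this choice and low_prob_fraction are placeholders, not the paper's exact method.
    """
    probs = np.asarray(token_probs, dtype=float)
    difficulties = -np.log(np.clip(probs, 1e-12, 1.0))  # surprisal per token
    k = max(1, int(len(probs) * low_prob_fraction))
    # Restrict to the low-probability (high-difficulty) tail of the generation.
    tail = np.sort(difficulties)[-k:]
    return float(np.var(tail))  # abnormally high variance suggests variant contamination

def is_contaminated(token_probs, threshold=2.0):
    """Flag an item when its score exceeds a calibration threshold
    (the threshold value here is a placeholder, not taken from the paper)."""
    return dvd_score(token_probs) > threshold
```

In practice the per-token probabilities would come from the evaluated model itself under temperature sampling, and the threshold would be calibrated on items known to be uncontaminated.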
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 17020