Estimating Contamination via Perplexity: Quantifying Memorisation in Language Model Evaluation

Published: 05 Mar 2025, Last Modified: 05 Mar 2025, MLDPR 2025, CC BY 4.0
Keywords: Data Contamination, Model Evaluation
TL;DR: We propose a novel, efficient method to quantify memorisation in LLM evaluation.
Abstract:

Data contamination in model evaluation is increasingly prevalent, as the massive training corpora of large language models often unintentionally include benchmark samples. Contamination analysis has therefore become an indispensable part of reliable model evaluation. However, existing methods of contamination analysis require access to the entire training data, which is often confidential for recent models. This prevents the community from rigorously auditing these models and accurately assessing their capabilities. In this paper, we propose a novel method to quantify contamination without access to the full training set, measuring the extent of contamination via perplexity. Our analysis provides evidence of significant memorisation by recent foundation models on popular reading comprehension and summarisation benchmarks, while multiple-choice benchmarks appear less contaminated.
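
To make the core idea concrete, here is a minimal sketch of perplexity-based memorisation probing, assuming a HuggingFace causal LM; the paper's exact contamination score, model choices, and decision threshold are not specified here, and the model name, sample list, and comparison against paraphrased controls below are illustrative assumptions only.

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def sample_perplexity(model, tokenizer, text: str) -> float:
    """Compute the perplexity a causal LM assigns to one benchmark sample.

    Unusually low perplexity on verbatim benchmark text is taken as a
    signal that the sample may have been memorised during training.
    """
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean token-level
        # cross-entropy loss; exp(loss) is the perplexity.
        out = model(**enc, labels=enc["input_ids"])
    return math.exp(out.loss.item())

# Hypothetical usage: score verbatim benchmark samples; in practice one
# would compare against paraphrased controls, where a large gap between
# verbatim and paraphrase perplexity suggests contamination.
model_name = "gpt2"  # stand-in for the audited foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

benchmark_samples = [
    "The quick brown fox jumps over the lazy dog.",  # placeholder text
]
for s in benchmark_samples:
    print(f"{sample_perplexity(model, tokenizer, s):.2f}  {s[:40]}")
```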

Submission Number: 6