A Comprehensive Survey of Contamination Detection Methods in Large Language Models

Published: 09 Jul 2025, Last Modified: 09 Jul 2025. Accepted by TMLR. License: CC BY 4.0
Abstract: With the rise of Large Language Models (LLMs) in recent years, abundant new opportunities are emerging, but also new challenges, among which contamination is quickly becoming critical. Business applications and fundraising in Artificial Intelligence (AI) have reached a scale at which a few percentage points gained on popular question-answering benchmarks could translate into tens of millions of dollars, placing high pressure on model integrity. At the same time, it is becoming increasingly difficult to keep track of the data that LLMs have seen, and effectively impossible with closed-source models such as GPT-4 and Claude-3, which disclose no information about their training sets. As a result, contamination becomes a major issue: LLMs' reported performance may no longer be reliable, as high scores may be at least partly due to prior exposure to the evaluation data. This limitation jeopardizes genuine capability improvement in the field of NLP, yet methods for efficiently detecting contamination remain scarce. In this paper, we survey all recent work on contamination detection with LLMs, analyzing their methodologies and use cases to shed light on the appropriate usage of contamination detection methods. Our work calls the NLP research community's attention to the need to systematically account for contamination bias in LLM evaluation.
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: We included the changes requested by the Action Editor:
- Restructuring of the Introduction, with a paragraph comparing this survey against each of the other surveys. We also no longer describe these other surveys as concurrent.
- Expansion of Subsection 5.4 (Future Challenges), especially the discussion of legal frameworks.
Assigned Action Editor: ~Alain_Durmus1
Submission Number: 4608