Abstract: The rapid development of large language models (LLMs) has transformed the landscape of natural language processing. Evaluating LLMs properly is crucial for understanding their potential and addressing concerns such as safety. However, LLM evaluation faces numerous challenges, among which contamination stands out as a key issue that undermines the reliability of evaluations. In this work, we introduce the concept of *contamination resistance* to address this challenge. We propose a benchmark based on Caesar ciphers (e.g., "ab" → "bc" when the shift is 1), which, despite its simplicity, is an excellent example of a contamination-resistant benchmark. We evaluate widely used LLMs on this benchmark under various settings and find that they struggle with it when contamination is controlled. Our findings reveal issues in current LLMs and raise important questions regarding their true capabilities. Our work contributes to the development of contamination-resistant benchmarks, enabling more rigorous LLM evaluation and offering deeper insights into the capabilities and limitations of LLMs.
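The abstract's running example ("ab" → "bc" with a shift of 1) can be illustrated with a minimal Python sketch of the Caesar cipher transformation. The function name `caesar_shift` and the choice to leave non-letter characters unchanged are illustrative assumptions, not the paper's actual benchmark implementation.

```python
def caesar_shift(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping around the alphabet.

    Non-letter characters are left unchanged (an assumption for this sketch).
    """
    result = []
    for ch in text:
        if ch.islower():
            result.append(chr((ord(ch) - ord("a") + shift) % 26 + ord("a")))
        elif ch.isupper():
            result.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            result.append(ch)
    return "".join(result)

# The example from the abstract: "ab" -> "bc" when the shift is 1.
assert caesar_shift("ab", 1) == "bc"
```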
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: NLP datasets, automatic evaluation of datasets, evaluation methodologies, evaluation, contamination, LLM, model analysis, reasoning
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Submission Number: 4126