Position: AI Competitions Provide the Gold Standard for Empirical Rigor in GenAI Evaluation

Published: 01 May 2025, Last Modified: 23 Jul 2025 · ICML 2025 Position Paper Track (Oral) · CC BY 4.0
Abstract: In this position paper, we observe that empirical evaluation in Generative AI is at a crisis point: traditional ML evaluation and benchmarking strategies are insufficient to meet the needs of evaluating modern GenAI models and systems. There are many reasons for this, including the fact that these models typically have nearly unbounded input and output spaces, typically lack a well-defined ground-truth target, and typically exhibit strong feedback loops and prediction dependence based on the context of previous model outputs. Beyond these critical issues, we argue that the problems of *leakage* and *contamination* are in fact the most important and most difficult issues to address for GenAI evaluations. Interestingly, the field of AI Competitions has developed effective measures and practices to combat leakage in order to counteract cheating by bad actors within a competition setting. This makes AI Competitions an especially valuable (but underutilized) resource. Now is the time for the field to view AI Competitions as the gold standard for empirical rigor in GenAI evaluation, and to harness and value their results accordingly.
Lay Summary: Evaluating Generative AI (GenAI) models like ChatGPT and Gemini poses unique challenges that traditional machine learning benchmarking methods don't address well. It is hard to know whether a model truly understands a task or has simply "memorized" information from its training data. GenAI models need to be tested on novel tasks to reliably measure their capabilities, much as humans are tested on exams they haven't seen before. Because these models are exposed to internet-scale data, sometimes including common research benchmarks, it is even harder to find genuinely novel tasks to test them with. In response to this issue, known as "contamination" or "leakage," the AI research community urgently needs new, more reliable ways to measure GenAI capabilities. We argue that AI Competitions, such as those hosted on platforms like Kaggle, offer two key advantages. First, they provide a continuous stream of novel tasks for testing GenAI model capabilities. Second, they are specifically designed to mitigate leakage risks by restricting network access and leveraging parallel development by thousands of independent teams. Given these benefits, we propose that AI Competitions be adopted as the "gold standard" for rigorously evaluating GenAI models. This is crucial for the AI industry, as effective evaluation is essential for guiding the development and improvement of GenAI models across diverse applications.
Verify Author Names: My co-authors have confirmed that their names are spelled correctly both on OpenReview and in the camera-ready PDF. (If needed, please update ‘Preferred Name’ in OpenReview to match the PDF.)
No Additional Revisions: I understand that after the May 29 deadline, the camera-ready submission cannot be revised before the conference. I have verified with all authors that they approve of this version.
Pdf Appendices: My camera-ready PDF file contains both the main text (not exceeding the page limits) and all appendices that I wish to include. I understand that any other supplementary material (e.g., separate files previously uploaded to OpenReview) will not be visible in the PMLR proceedings.
Latest Style File: I have compiled the camera ready paper with the latest ICML2025 style files <https://media.icml.cc/Conferences/ICML2025/Styles/icml2025.zip> and the compiled PDF includes an unnumbered Impact Statement section.
Paper Verification Code: ZjQ1O
Permissions Form: pdf
Primary Area: Research Priorities, Methodology, and Evaluation
Keywords: benchmarking, evaluation, competitions, leakage, contamination
Submission Number: 525