Progress over Points: Reframing LM Benchmarks Around Scientific Objectives

Published: 24 Sept 2025 · Last Modified: 24 Sept 2025 · NeurIPS 2025 LLM Evaluation Workshop Poster · CC BY 4.0
Keywords: progress-oriented benchmarks, open-ended evaluation, evaluation environments, LLMs, test-time scaling, benchmark design
TL;DR: In this paper, we argue for a shift from static, puzzle-like benchmarking to progress-oriented, open-ended evaluation environments whose objectives themselves are the core targets of scientific progress.
Abstract: Benchmarks that test LLMs on static, already-solved problems (e.g., math word problems) have effectively demonstrated basic capability acquisition. The natural progression has been toward larger, more comprehensive, and more challenging collections of static problems, an approach that inadvertently constrains the kinds of advances we can measure and incentivize. To address this limitation, we argue for progress-oriented benchmarks: problem environments whose objectives are themselves the core targets of scientific progress, so that achieving state of the art on the benchmark *advances the field*. As an introductory step, we instantiate an environment based on the NanoGPT speedrun. The environment standardizes a dataset slice, a reference model and training harness, and rich telemetry, with run-time verification and anti-gaming checks. Evaluation centers on the scientific delta achieved: the best-attained loss and the efficiency frontier. Using this environment, we achieve a new state-of-the-art training time, improving upon the previous record by 3 seconds, and qualitatively observe the emergence of novel algorithmic ideas. Comparisons between models and agents remain possible, but they are a **means**, not the **end**; the benchmark’s purpose is to catalyze reusable improvements to the language modeling stack. With this release, the overarching goal is to seed a community shift from static problem leaderboards to test-time research on open-ended yet measurable scientific problems. In this new paradigm, progress on the benchmark is progress on the science, thus reframing "benchmarking" as a vehicle for scientific advancement. Code available at [https://anonymous.4open.science/r/open-ended-benchmarks-private-DC26](https://anonymous.4open.science/r/open-ended-benchmarks-private-DC26)
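The efficiency-frontier scoring described in the abstract can be pictured with a minimal sketch. This is not the released code; the `Run` fields, function names, and numeric values below are illustrative assumptions about how verified runs might be compared on the loss-vs-training-time frontier.

```python
# Hypothetical sketch of efficiency-frontier scoring for verified speedrun runs.
from dataclasses import dataclass

@dataclass(frozen=True)
class Run:
    name: str
    train_seconds: float   # verified wall-clock training time
    val_loss: float        # best-attained validation loss on the fixed data slice

def efficiency_frontier(runs: list[Run]) -> list[Run]:
    """Pareto frontier: runs not dominated in both time and loss by any other run."""
    frontier: list[Run] = []
    for r in sorted(runs, key=lambda r: (r.train_seconds, r.val_loss)):
        # Keep a run only if it reaches a strictly lower loss than every faster run kept so far.
        if not frontier or r.val_loss < frontier[-1].val_loss:
            frontier.append(r)
    return frontier

def improves_frontier(candidate: Run, prior: list[Run]) -> bool:
    """True if no prior run matches or beats the candidate on both axes (with one strict)."""
    return not any(
        r.train_seconds <= candidate.train_seconds
        and r.val_loss <= candidate.val_loss
        and (r.train_seconds < candidate.train_seconds or r.val_loss < candidate.val_loss)
        for r in prior
    )

if __name__ == "__main__":
    # Illustrative numbers only.
    prior = [Run("baseline", 1800.0, 3.28), Run("record", 180.0, 3.28)]
    candidate = Run("candidate", 177.0, 3.28)  # e.g., a 3-second improvement at equal loss
    print(improves_frontier(candidate, prior))                      # True
    print([r.name for r in efficiency_frontier(prior + [candidate])])  # ['candidate']
```

Under this reading, a submission counts as progress when it pushes the frontier outward rather than merely ranking above another model, which is consistent with the paper's framing of comparisons as a means rather than the end.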
Submission Number: 91