Keywords: benchmarking, evaluation, large language models, reasoning
TL;DR: In this paper we propose BeyondBench, a dynamic evaluation framework that generates algorithmic problems on the fly to measure genuine reasoning ability in language models rather than recall of memorized patterns.
Abstract: Evaluating language models fairly is becoming harder as static benchmarks available on the internet risk contamination by training data. This makes it unclear whether models are truly reasoning or just recalling answers. In this paper, we introduce $\textbf{BeyondBench}$, an evaluation framework that avoids this problem through $\textbf{algorithmic problem generation}$. Unlike traditional benchmarks that risk contamination from internet-scale training data, $\textbf{BeyondBench}$ creates mathematically grounded problems on the fly, ensuring each test remains fresh and uncontaminated. Our framework covers $\textbf{44 algorithmic tasks}$ with a total of $\textbf{117 variations}$, grouped into three difficulty levels: the $\textit{Easy Suite}$ (29 tasks) for basic arithmetic and statistics, the $\textit{Medium Suite}$ (5 tasks, 49 variations) for sequence patterns and reasoning, and the $\textit{Hard Suite}$ (10 tasks, 68 variations) tackling NP-complete and constraint satisfaction problems. Each task generates problems from a combinatorial space larger than $10^{15}$ unique instances, with solutions verified deterministically by mathematical proofs. We evaluated $\textbf{101 language models}$, including 85 open-source and 16 closed-source models, spanning sizes from 0.5B to 141B parameters and multiple quantization schemes. Our results show consistent reasoning deficiencies across model families, with performance degrading sharply as problem complexity increases from polynomial to exponential. In our Hard Suite evaluations, models such as Gemini-2.5-pro, Llama-3.3-70B, and Qwen2.5-72B achieved average accuracies of 56.38%, 26.91%, and 33.60%, respectively. Moreover, performance drops drastically without tool usage: GPT-5, GPT-5-mini, and GPT-5-nano show $\textbf{declines}$ of 16.81%, 28.05%, and 47.59% in accuracy on the Hard Suite. The contamination resistance of $\textbf{BeyondBench}$ rests on three guarantees: (i) the problem space is vastly larger than any static dataset, (ii) every instance has a unique, verifiable solution, and (iii) isomorphic transformations generate semantically equivalent but syntactically new problems. $\textbf{BeyondBench}$ reframes reasoning evaluation around genuine algorithmic problem-solving capability, ensuring fair and meaningful comparisons across models.
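To make the generate-and-verify loop concrete, the sketch below shows a toy, Easy-Suite-style task in the spirit of the framework: a fresh problem is sampled from a large parameter space, a candidate answer is checked by a deterministic verifier, and an isomorphic rewrite yields a syntactically new but semantically equivalent prompt. This is a minimal illustration only; the task and the function names (`generate_instance`, `verify`, `isomorphic_variant`) are hypothetical and do not reflect the paper's actual implementation.

```python
import random
import string

def generate_instance(seed=None):
    """Sample a fresh linear-equation problem a*x + b = c with a unique
    integer solution. Hypothetical toy task, not a BeyondBench generator."""
    rng = random.Random(seed)
    a = rng.randint(2, 50)
    x = rng.randint(-100, 100)        # planted (unique) solution
    b = rng.randint(-100, 100)
    c = a * x + b
    prompt = f"Solve for x: {a}*x + {b} = {c}. Answer with the integer value of x."
    return {"prompt": prompt, "a": a, "b": b, "c": c, "answer": x}

def verify(instance, model_answer: str) -> bool:
    """Deterministic check: substitute the model's answer back into the equation."""
    try:
        x = int(model_answer.strip())
    except ValueError:
        return False
    return instance["a"] * x + instance["b"] == instance["c"]

def isomorphic_variant(instance, rng=None):
    """Produce a semantically equivalent but syntactically different prompt
    by renaming the unknown and reordering the terms."""
    rng = rng or random.Random()
    symbol = rng.choice([s for s in string.ascii_lowercase if s != "x"])
    a, b, c = instance["a"], instance["b"], instance["c"]
    prompt = f"Find the integer {symbol} such that {b} + {a}*{symbol} = {c}."
    return {**instance, "prompt": prompt}

if __name__ == "__main__":
    inst = generate_instance(seed=42)
    print(inst["prompt"])
    print("Verifier accepts planted answer:", verify(inst, str(inst["answer"])))
    print(isomorphic_variant(inst, random.Random(7))["prompt"])
```

Because correctness is checked by a verifier rather than by matching a stored answer key, any such generator can emit unlimited fresh instances, which is the property the abstract credits for contamination resistance.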
Primary Area: datasets and benchmarks
Submission Number: 16529