Keywords: multilingual benchmarks, benchmarking, evaluation methodologies, reasoning, automatic evaluation, games, zero-sum, long-horizon planning
Abstract: Benchmarks for language models have become essential tools for research. Yet such benchmarks face a persistent contamination problem, with recent studies finding that 25-50\% of evaluation datasets appear in training corpora. This holds even in the two-player zero-sum game setting, where most benchmarks are based on popular games, such as chess, whose optimal strategies are widely documented on the web. Such contamination makes it difficult to distinguish memorization from genuine reasoning. To address these problems, we introduce TCG-Bench, a benchmark based on a new two-player trading card game (TCG) similar in spirit to games like Magic: The Gathering. TCG-Bench offers three key innovations: (1) a contamination-resistant design that separates the publicly released game engine from the hidden card implementations, (2) a continuous difficulty spectrum, calibrated via Monte Carlo simulation, that prevents benchmark saturation, and (3) a parallel implementation in English and Arabic, making it the first multilingual text-based game benchmark. Our analysis across 17 models (42,750+ games) reveals that performance declines exponentially with difficulty, while model size correlates only weakly with strategic ability. We also observe cross-linguistic performance gaps between English and Arabic, reaching 47.4\% at the 32B scale, highlighting the need for multilingual game benchmarks that target reasoning capabilities in the target language.
Submission Number: 10