TL;DR: A new benchmark of math olympiad problems for evaluating LLMs on constructive proofs
Abstract: While Large Language Models (LLMs) demonstrate impressive performance in mathematics, existing math benchmarks come with significant limitations. Many focus on problems with fixed ground-truth answers, and are often saturated due to problem simplicity or the viability of guessing or memorization. Crucially, they capture only a narrow subset of relevant math problems. To address this research gap, we introduce MathConstruct, a new benchmark of 127 challenging problems sourced from various math competitions, which targets *constructive proofs*, a widely encountered problem type requiring the construction of mathematical objects with specific properties. These proofs are particularly suitable for LLM evaluation, as solution correctness can be easily verified. Our automated verifiers also enable MathConstruct to generate problem variations, used to evaluate robustness. State-of-the-art LLMs solve only 41\% of MathConstruct problems, highlighting its complexity and importance for LLM evaluation.
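To make the "easy to verify" property concrete, below is a minimal sketch of what an automated checker for a constructive answer can look like. The toy problem, function names, and checking logic are illustrative assumptions and are not taken from the MathConstruct codebase.

```python
"""
Hypothetical illustration of automatically verifying a constructive proof.
Toy problem: exhibit n distinct positive integers such that the sum of any
two of them is a perfect square. Hard to construct, trivial to check.
"""
from itertools import combinations
from math import isqrt


def is_perfect_square(x: int) -> bool:
    r = isqrt(x)
    return r * r == x


def verify(construction: list[int], n: int) -> bool:
    """Return True iff `construction` is a valid answer for size n."""
    if len(construction) != n or len(set(construction)) != n:
        return False          # wrong size or repeated elements
    if any(x <= 0 for x in construction):
        return False          # must be positive integers
    return all(is_perfect_square(a + b)
               for a, b in combinations(construction, 2))


if __name__ == "__main__":
    # 6 + 19 = 25, 6 + 30 = 36, 19 + 30 = 49 -- all perfect squares.
    print(verify([6, 19, 30], n=3))   # True
    print(verify([1, 2, 3], n=3))     # False (1 + 2 = 3 is not a square)
```

Because such a checker is parametrized (here by n), the same verification logic can grade variations of a problem, which is the kind of mechanism the abstract alludes to for robustness evaluation.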
Lay Summary: Existing mathematics benchmarks predominantly focus on problems with either unique answers or formal proofs.
We introduce a new type of benchmark in which models must construct a mathematical object with certain properties (a so-called constructive proof). This benchmark enables evaluating LLMs on a broader set of capabilities through which they could help mathematicians.
Link To Code: https://github.com/eth-sri/mathconstruct
Primary Area: General Machine Learning->Evaluation
Keywords: LLM, math, evaluation, benchmark, reasoning
Submission Number: 11775