Abstract: The evaluation of mathematical reasoning capabilities constitutes a critical pathway toward achieving Artificial General Intelligence (AGI). Prevailing benchmarks, including MATH and AIME, mainly feature single-instantiation problems with fixed numbers, permitting pattern matching instead of principled deductive reasoning and leaving generalization to isomorphic problem variants untested.
To address these limitations, we propose the UTMath Benchmark, which employs a rigorous unit-testing methodology that simultaneously quantifies both solution accuracy and the generality of the solution space.
It comprises 1,053 problems spanning 9 mathematical domains, each accompanied by an average of 68 varied test cases.
With $10^7$ answer possibilities per problem on average, UTMath sets new standards for robust reasoning while preventing memorization.
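The unit-testing methodology can be pictured as follows: a model must produce a general solution function, which is then checked against many varied instantiations of the same problem. This is an illustrative sketch only; the function and case names are hypothetical and not the UTMath harness itself.

```python
# Sketch of unit-test-style evaluation of a model-generated solution.
# All names here are illustrative, not the actual UTMath evaluation code.

def candidate_solution(n):
    # A model-generated general solution, e.g. the n-th triangular number.
    return n * (n + 1) // 2

def evaluate(solution, test_cases):
    """Return the fraction of (input, expected) pairs the solution passes."""
    passed = sum(1 for x, expected in test_cases if solution(x) == expected)
    return passed / len(test_cases)

# Many varied instantiations of one underlying problem, so a single
# memorized answer cannot pass; the solution must generalize.
cases = [(n, n * (n + 1) // 2) for n in range(1, 69)]
print(evaluate(candidate_solution, cases))  # → 1.0
```

A hard-coded constant answer would pass at most one of these cases, which is why multiple test cases per problem discourage memorization.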
UTMath is highly challenging: the best-performing model, o1-mini, solves only 32.57\% of the problems, followed by o1-preview at 27.16\% and GPT-4o at 26.93\%.
We further propose Reasoning-to-Code Thoughts (RCoT), a prompting strategy that decouples symbolic reasoning from code synthesis. RCoT guides LLMs to first derive formal reasoning structures before generating executable code, producing generalizable solutions rather than situation-specific answers.
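The two-stage decoupling behind RCoT can be sketched as a pair of prompts, one eliciting formal reasoning and one eliciting code conditioned on that reasoning. The wording below is a hypothetical paraphrase for illustration, not the exact prompts used in the paper.

```python
# Illustrative two-stage RCoT-style prompting (prompt text is a
# hypothetical paraphrase, not the paper's actual prompts).

REASONING_PROMPT = (
    "Problem: {problem}\n"
    "First derive the general mathematical structure of the solution "
    "(recurrence, closed form, or invariant) before writing any code."
)

CODE_PROMPT = (
    "Reasoning:\n{reasoning}\n\n"
    "Now implement a function solve(n) that computes the answer for any "
    "valid input, following the reasoning above."
)

def build_rcot_prompts(problem: str, reasoning: str) -> tuple[str, str]:
    """Assemble the two decoupled stages: reasoning first, then code."""
    return (
        REASONING_PROMPT.format(problem=problem),
        CODE_PROMPT.format(reasoning=reasoning),
    )

stage1, stage2 = build_rcot_prompts(
    "Compute the n-th triangular number.",
    "T(n) satisfies T(n) = T(n-1) + n, with closed form n(n+1)/2.",
)
print(stage1.splitlines()[0])  # → Problem: Compute the n-th triangular number.
```

Separating the stages means the code-generation step is conditioned on an explicit derivation rather than jumping straight to a numeric answer, which is what encourages solutions that hold across problem variants.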
To help the community push mathematical reasoning further, we release UTMath-Train (70k samples), a companion training set generated under the same protocol.
Our benchmark can be accessed via the following link: \href{https://anonymous.4open.science/r/UTMath-3356}{UTMath}
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Language Modeling, Resources and Evaluation
Contribution Types: Model analysis & interpretability, Data resources, Data analysis
Languages Studied: English
Keywords: Language Modeling, Resources and Evaluation
Submission Number: 5212