An Investigation of Robustness of LLMs in Mathematical Reasoning: Benchmarking with Mathematically-Equivalent Transformation of Advanced Mathematical Problems

ICLR 2026 Conference Submission 9633 Authors

17 Sept 2025 (modified: 26 Jan 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: mathematical reasoning, benchmark, language model, robustness, generalization
TL;DR: We introduce the GAP framework and 6,306‑item PutnamGAP benchmark, showing that even top LLMs suffer large accuracy drops when math-equivalent problems are reworded or reparametrized.
Abstract: In this paper, we introduce a systematic framework that goes beyond conventional evaluation to assess LLMs' mathematical-reasoning robustness by stress-testing them on advanced math problems that are mathematically equivalent but vary linguistically and parametrically. These transformations allow us to measure the sensitivity of LLMs to non-mathematical perturbations, thereby enabling a more accurate evaluation of their mathematical reasoning capabilities. Using this new evaluation methodology, we create PutnamGAP, a new benchmark dataset containing multiple mathematically equivalent variants of competition-level math problems. With the new dataset, we evaluate multiple families of representative LLMs and examine their robustness. Across 17 commercial and open-source models we observe sharp performance degradation on the variants. OpenAI's flagship reasoning model, O3, scores 49% on the original problems but drops by 4 percentage points on surface variants and by 10.5 percentage points on core-step-based variants, while smaller models fare far worse. Overall, the results show that the proposed evaluation methodology is effective for deepening our understanding of LLM robustness and for generating new insights toward further improving their mathematical reasoning capabilities.
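The robustness metric reported above (percentage-point drop on variants relative to originals) can be illustrated with a minimal sketch. This is not the authors' code: the `results` structure, field names, and values are hypothetical placeholders standing in for per-problem correctness judgments produced by the real PutnamGAP evaluation pipeline.

```python
# Minimal sketch (hypothetical data, not the authors' pipeline):
# given per-problem correctness flags for a model on the original statement
# and on its surface / core-step variants, compute accuracy per version
# and the percentage-point drop relative to the originals.
from statistics import mean

results = [
    # each entry: one problem, correctness of the model on each variant type
    {"original": True,  "surface": True,  "core_step": False},
    {"original": True,  "surface": False, "core_step": False},
    {"original": False, "surface": False, "core_step": False},
]

def accuracy(variant: str) -> float:
    """Fraction of problems answered correctly for the given variant type."""
    return mean(1.0 if r[variant] else 0.0 for r in results)

orig_acc = accuracy("original")
for variant in ("surface", "core_step"):
    drop_pp = (orig_acc - accuracy(variant)) * 100  # drop in percentage points
    print(f"{variant}: {accuracy(variant):.1%} (drop of {drop_pp:.1f} pp vs. originals)")
```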
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 9633