From Abstract to Contextual: What LLMs Still Cannot Do in Mathematics

ICLR 2026 Conference Submission 15956 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Large Language Models, Mathematical Reasoning, Evaluation
Abstract: Large language models now solve many benchmark math problems at near‑expert levels, yet this progress has not fully translated into reliable performance in real‑world applications. We study this gap through contextual mathematical reasoning, where the mathematical core must first be formulated from a descriptive scenario. We introduce CORE-MATH, a benchmark that repurposes AIME and MATH-500 problems into two contextual settings: Scenario Grounding (SG), which embeds abstract problems in realistic narratives without increasing reasoning complexity, and Complexity Scaling (CS), which transforms explicit conditions into sub‑problems to capture how constraints often appear in practice. Evaluating 61 proprietary and open‑source models, we observe sharp drops: on average, open‑source models decline by 13 and 34 points on SG and CS, respectively, while proprietary models drop by 13 and 20. Error analysis shows that failures are dominated by incorrect problem formulation, and that formulation accuracy declines as original problem difficulty increases. Correct formulation emerges as a prerequisite for success, and its sufficiency improves with model scale, indicating that larger models advance in both understanding and reasoning. Nevertheless, formulation and reasoning remain two complementary bottlenecks that limit contextual mathematical problem solving. Finally, we find that fine‑tuning with scenario data improves performance, whereas formulation‑only training is ineffective. Even so, the performance gaps are only partially closed, highlighting contextual mathematical reasoning as a central unsolved challenge for LLMs.
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 15956