TL;DR: We construct MATH-P-Simple and MATH-P-Hard to benchmark LLMs' mathematical reasoning under simple and hard perturbations, respectively, and to examine memorization issues.
Abstract: Large language models have demonstrated impressive performance on challenging mathematical reasoning tasks, which has sparked debate over whether this performance reflects true reasoning capability or memorization. To investigate this question, prior work has constructed mathematical benchmarks in which questions undergo simple perturbations -- modifications that still preserve the underlying reasoning patterns of the solutions. However, no work has explored hard perturbations, which fundamentally change the nature of the problem so that the original solution steps do not apply. To bridge this gap, we construct MATH-P-Simple and MATH-P-Hard via simple and hard perturbations, respectively. Each consists of 279 perturbed math problems derived from level-5 (hardest) problems in the MATH dataset (Hendrycks et al., 2021). We observe significant performance drops on MATH-P-Hard across various models, including o1-mini (-16.49%) and gemini-2.0-flash-thinking (-12.9%). We also raise concerns about a novel form of memorization in which models blindly apply learned problem-solving skills without assessing their applicability to modified contexts. This issue is amplified when the original problems are used for in-context learning. We call for research efforts to address this challenge, which is critical for developing more robust and reliable reasoning models. The project is available at https://math-perturb.github.io/.
Lay Summary: Large language models have recently shown impressive performance on challenging mathematical reasoning tasks. But are these models truly reasoning through the problems, or are they just repeating what they have seen during training?
We tested how these models perform when math problems are changed in ways that break the patterns seen during training. Specifically, we created two types of modified problems based on a popular math dataset:
- Simple perturbations: Slight changes that keep the problem-solving steps the same.
- Hard perturbations: Deeper changes that require a different solution strategy (see the illustration after this list).
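To make the distinction concrete, here is a hypothetical illustration of our own (not a problem drawn from MATH-P-Simple or MATH-P-Hard):
- Original: "What is the sum of the first 100 positive integers?" -- solved by the closed-form formula n(n+1)/2 = 5050.
- Simple perturbation: "What is the sum of the first 120 positive integers?" -- the same formula still applies: 120 * 121 / 2 = 7260.
- Hard perturbation: "What is the sum of the digits of the first 100 positive integers?" -- the series formula no longer applies; a digit-counting argument gives 901 instead.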
We observe significant performance drops on hard perturbations across various models, including OpenAI's o1-mini (-16.49%) and Google DeepMind's gemini-2.0-flash-thinking (-12.9%). Models often blindly apply learned strategies to problems even when those strategies no longer work. Our findings highlight an important challenge in AI research: ensuring that models truly understand problems rather than just mimicking familiar patterns.
Link To Code: https://math-perturb.github.io/
Primary Area: Deep Learning->Large Language Models
Keywords: mathematical reasoning, benchmark, robustness
Submission Number: 13579