Measuring Mathematical Problem Solving With the MATH Dataset

Published: 18 Oct 2021, Last Modified: 25 Nov 2024
NeurIPS 2021 Datasets and Benchmarks Track (Round 2)
Keywords: math, transformers, reasoning, logic
TL;DR: To find the limits of Transformers, we collected 12,500 math problems. While a three-time IMO gold medalist got 90%, GPT-3 models got ~5%, with accuracy increasing only slowly. If current trends continue, ML models will remain far from achieving mathematical reasoning.
Abstract: Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations. To facilitate future research and increase accuracy on MATH, we also contribute a large auxiliary pretraining dataset which helps teach models the fundamentals of mathematics. Even though we are able to increase accuracy on MATH, our results show that accuracy remains relatively low, even with enormous Transformer models. Moreover, we find that simply increasing budgets and model parameter counts will be impractical for achieving strong mathematical reasoning if scaling trends continue. While scaling Transformers is automatically solving most other text-based tasks, scaling is not currently solving MATH. To have more traction on mathematical problem solving we will likely need new algorithmic advancements from the broader research community.
Supplementary Material: zip
URL: https://github.com/hendrycks/math
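As a quick orientation to the released data, here is a minimal loading sketch. It assumes the layout used in the GitHub repository above, where each problem is a standalone JSON file with "problem", "level", "type", and "solution" fields, and the final answer appears inside a \boxed{...} command within the solution; the helper names and example path are illustrative, not part of the official codebase.

```python
import json


def load_problem(path):
    """Load a single MATH problem stored as a JSON file.

    Assumed fields (per the released dataset layout): "problem",
    "level", "type", and "solution".
    """
    with open(path) as f:
        return json.load(f)


def extract_boxed_answer(solution):
    """Return the contents of the last \\boxed{...} in a solution string,
    or None if no boxed answer is present. Tracks brace depth so nested
    braces (e.g. \\boxed{\\frac{1}{2}}) are kept intact."""
    start = solution.rfind("\\boxed{")
    if start == -1:
        return None
    i = start + len("\\boxed{")
    depth = 1
    chars = []
    while i < len(solution):
        ch = solution[i]
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                break
        chars.append(ch)
        i += 1
    return "".join(chars)


# Hypothetical usage on one extracted problem file:
# problem = load_problem("MATH/test/algebra/1.json")
# print(problem["level"], problem["type"])
# print(extract_boxed_answer(problem["solution"]))
```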
Community Implementations: [4 code implementations (CatalyzeX)](https://www.catalyzex.com/paper/measuring-mathematical-problem-solving-with/code)
