U-MATH: A University-Level Benchmark for Evaluating Mathematical Skills in LLMs

ICLR 2025 Conference Submission 13807 Authors

28 Sept 2024 (modified: 28 Nov 2024) · CC BY 4.0
Keywords: Large Language Models (LLMs), Mathematical Reasoning, Benchmarking, University-Level Mathematics, Multimodal, Automatic Evaluation, Solution Assessment
TL;DR: U-MATH, a challenging university-level math benchmark with both textual and visual problems, plus an additional μ-MATH benchmark for evaluating solution-assessment capabilities.
Abstract: The current evaluation of mathematical skills in LLMs is limited: existing benchmarks are relatively small, focus primarily on elementary and high-school problems, or lack topical diversity. Additionally, the inclusion of visual elements in tasks remains largely under-explored. To address these gaps, we introduce **U-MATH**, a novel benchmark of **1,100** unpublished open-ended university-level problems sourced from teaching materials. It is balanced across six core subjects, with **20% of the problems being multimodal**. Given the open-ended nature of U-MATH problems, we employ an LLM to judge the correctness of generated solutions. To this end, we release **$\boldsymbol\mu$-MATH**, a dataset for evaluating LLMs' capabilities in judging solutions. Our evaluation of general-domain, math-specific, and multimodal LLMs highlights the challenges U-MATH presents: LLMs achieve a maximum accuracy of only 63% on text-based tasks, dropping to 45% on visual problems. Solution assessment also proves challenging, with the best LLM judge attaining an F1-score of 80% on $\mu$-MATH. We open-source U-MATH, $\mu$-MATH, and our evaluation code on GitHub.
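To make the μ-MATH meta-evaluation concrete, below is a minimal sketch of the kind of loop the abstract describes: an automatic judge labels candidate solutions as correct or incorrect, and its verdicts are scored against human labels via F1. This is not the authors' released code; the record schema, the `JudgeFn` signature, and the toy `naive_judge` are illustrative assumptions, and a real judge would prompt an LLM with the problem, reference answer, and candidate solution, then parse its verdict.

```python
# Sketch of a mu-MATH-style meta-evaluation: score how well an automatic
# judge reproduces human correctness labels. Names and data are illustrative.
from typing import Callable

from sklearn.metrics import f1_score

# A judge maps (problem, reference_answer, candidate_solution) -> is_correct.
JudgeFn = Callable[[str, str, str], bool]


def meta_evaluate(records: list[dict], judge: JudgeFn) -> float:
    """Return the F1-score of a judge's verdicts against human labels."""
    human = [r["human_label"] for r in records]
    auto = [judge(r["problem"], r["reference"], r["solution"]) for r in records]
    return f1_score(human, auto)


# Toy stand-in judge: exact substring match against the reference answer.
# A real LLM judge must handle equivalent but differently-written answers
# (e.g. "0.5" vs "1/2"), which is precisely what makes judging hard.
def naive_judge(problem: str, reference: str, solution: str) -> bool:
    return reference.strip() in solution


records = [
    {"problem": "d/dx x^2 = ?", "reference": "2x",
     "solution": "The derivative is 2x.", "human_label": True},
    {"problem": "integral of 1/x dx = ?", "reference": "ln|x| + C",
     "solution": "The answer is x^(-2).", "human_label": False},
]
print(f"Judge F1: {meta_evaluate(records, naive_judge):.2f}")
```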
Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13807