UGPhysics: A Comprehensive Benchmark for Undergraduate Physics Reasoning with Large Language Models

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We introduce the UGPhysics dataset along with the MARJ answer-assessment pipeline to advance AI for physics reasoning.
Abstract: Large language models (LLMs) have demonstrated remarkable capabilities in solving complex reasoning tasks, particularly in mathematics. However, the domain of physics reasoning presents unique challenges that have received significantly less attention. Existing benchmarks often fall short in evaluating LLMs' abilities across the breadth and depth of undergraduate-level physics, underscoring the need for a comprehensive evaluation. To fill this gap, we introduce **UGPhysics**, a large-scale and diverse benchmark specifically designed to evaluate **U**nder**G**raduate-level **Physics** reasoning with LLMs. UGPhysics includes 5,520 undergraduate-level physics problems in both English and Chinese, spanning 13 subjects, seven answer types, and four distinct physics reasoning skills, all rigorously screened for data leakage. Additionally, we develop a Model-Assistant Rule-based Judgment (**MARJ**) pipeline specifically tailored for assessing physics problems, ensuring accurate evaluation. Our evaluation of 31 leading LLMs yields a highest overall accuracy of only 49.8% (achieved by OpenAI-o1-mini), emphasizing the need for models with physics reasoning skills that go beyond mathematical ability. We hope UGPhysics, along with MARJ, will drive future advancements in AI for physics reasoning. Code and data are available at https://github.com/YangLabHKUST/UGPhysics.
Lay Summary: Large language models (LLMs) like ChatGPT have shown impressive reasoning abilities, especially in solving math problems. But physics, a subject that combines math with a deep understanding of the physical world, is a tougher challenge. To see how well LLMs handle it, we created **UGPhysics**, a large and diverse set of over 5,500 carefully crafted undergraduate-level physics questions written in English and Chinese. These questions span 13 physics subjects and test different skills, from understanding concepts to applying formulas. To fairly evaluate model responses, we also built a specialized grading system called **MARJ** that blends human-like judgment with automated rules. When we tested 31 top-performing language models, the highest score was just under 50%, showing that current models have much room to improve in physics reasoning: they often struggle even when they do well in math. The UGPhysics dataset and MARJ evaluation system are now publicly released, helping researchers develop AI that can better understand and solve physics problems, a key step toward more capable AI.
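To make the rule-plus-model grading idea concrete, here is a minimal sketch in the spirit of MARJ: deterministic rules verify the easy cases (exact or numerically close answers), and a model-based judge is consulted only when the rules cannot decide. All names, the numeric tolerance, and the fallback prompt below are illustrative assumptions for exposition, not the released implementation; see the linked repository for the actual pipeline.

```python
# Sketch of a rule-first, model-assisted judgment loop in the spirit of
# MARJ. Names, tolerance, and prompt are assumptions, not the real code.
import re

NUMERIC_RTOL = 1e-2  # assumed relative tolerance for numeric answers


def _parse_number(text: str):
    """Extract a float from an answer string, if one is present."""
    match = re.search(r"-?\d+(?:\.\d+)?(?:[eE][+-]?\d+)?", text.replace(",", ""))
    return float(match.group()) if match else None


def rule_based_match(prediction: str, reference: str) -> bool:
    """Cheap deterministic checks: string equality, then numeric closeness."""
    if prediction.strip().lower() == reference.strip().lower():
        return True
    p, r = _parse_number(prediction), _parse_number(reference)
    if p is not None and r is not None and r != 0:
        return abs(p - r) / abs(r) <= NUMERIC_RTOL
    return False


def marj_judge(prediction: str, reference: str, llm_judge) -> bool:
    """Accept if rules match; otherwise defer to an LLM judge for harder
    equivalences (e.g. symbolic forms, unit conversions)."""
    if rule_based_match(prediction, reference):
        return True
    verdict = llm_judge(
        "Are these two physics answers equivalent?\n"
        f"Predicted: {prediction}\nReference: {reference}\n"
        "Reply 'yes' or 'no'."
    )
    return verdict.strip().lower().startswith("yes")
```

The rule-first ordering keeps grading cheap and reproducible for the bulk of answers, reserving the (more expensive, less deterministic) model judge for cases that simple string and numeric checks cannot settle.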
Link To Code: https://github.com/YangLabHKUST/UGPhysics
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Models, benchmark and dataset, physics reasoning
Submission Number: 262