Abstract: Large language models (LLMs) have emerged as powerful tools for processing and generating human-like text, raising intriguing possibilities for their application in physics---a field characterized by complex mathematical formulations, abstract concepts, and precise reasoning. While recent studies have demonstrated LLMs' potential in physics applications, from automating simulations to enhancing physics education, the field lacks a systematic framework for understanding and advancing these efforts. This paper presents a comprehensive survey of LLM applications in physics across four critical domains: physical simulation, knowledge discovery, physical reasoning, and physics education, revealing both promising advances and fundamental challenges. We introduce a systematic taxonomy that classifies approaches by LLM utilization pattern: generic encoders, language generators, auxiliary modules, and autonomous agents. Our analysis uncovers common patterns across successful applications while identifying key limitations of current approaches. We also compile and analyze relevant benchmarks and datasets, providing a resource for evaluating LLM performance on physics tasks. Finally, we outline critical challenges and promising research directions, offering a roadmap for leveraging LLMs to advance both physics research and education.
Paper Type: Long
Research Area: Information Extraction
Research Area Keywords: Applications
Contribution Types: Surveys
Languages Studied: English
Submission Number: 742