Demystifying Scientific Problem-Solving in LLMs by Probing Knowledge and Reasoning

ICLR 2026 Conference Submission22637 Authors

Published: 20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: reasoning, knowledge tracing/discovering/inducing, applications
TL;DR: We conduct a systematic study to examine reasoning and knowledge synergy in scientific problem-solving. We show that reasoning LLMs can be bottlenecked by domain knowledge, while reasoning-fine-tuning can help models surface relevant knowledge.
Abstract: Scientific problem-solving poses unique challenges for LLMs, requiring both deep domain knowledge and the ability to apply such knowledge through complex reasoning. While automated scientific reasoners hold great promise for assisting human scientists, there is currently no widely adopted holistic benchmark for evaluating scientific reasoning, and few approaches systematically disentangle the distinct roles of knowledge and reasoning in these tasks. To address these gaps, we introduce **SciReas**, a diverse suite of existing benchmarks for scientific reasoning tasks, and **SciReas-Pro**, a selective subset that requires more complex reasoning. Our holistic evaluation surfaces insights about scientific reasoning performance that remain hidden when relying on individual benchmarks alone. We then propose **KRUX**, a probing framework for studying the distinct roles of reasoning and knowledge in scientific tasks. Combining the two, we conduct an in-depth analysis that yields several key findings: (1) Retrieving task-relevant knowledge from model parameters is a critical bottleneck for LLMs in scientific reasoning; (2) Reasoning models consistently benefit from external knowledge added in-context on top of the reasoning enhancement; (3) Enhancing verbalized reasoning improves LLMs' ability to surface task-relevant knowledge.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 22637