Keywords: agentic reasoning, search, tool use, benchmark, tool-augmented test-time scaling, revisits
Abstract: As LLMs are increasingly deployed as agents, agentic reasoning, the ability to combine reasoning with tool use (especially search), becomes a critical skill.
However, agentic reasoning is hard to isolate when it is evaluated in complex environments and tasks. Current agent benchmarks often entangle agentic reasoning with challenging math reasoning, expert-level knowledge, and other advanced capabilities.
To fill this gap, we build GSM-Agent, a novel benchmark in which an LLM agent must solve grade-school-level reasoning problems but is given only the question in the prompt, without the premises that contain the information necessary to solve the task; the agent must proactively collect that information using tools.
Although the underlying tasks are grade-school math problems, we observe that even frontier models such as GPT-5 achieve only 67% accuracy.
To understand and analyze agentic reasoning patterns, we propose the concept of the *agentic reasoning graph*: we cluster the environment's document embeddings into nodes and map each tool call to its nearest node to obtain a reasoning path. Surprisingly, we identify that revisiting, i.e., returning to a previously visited node after leaving it, which is widely regarded as a crucial pattern in static reasoning, is an ability many models lack in agentic settings. Based on this insight, we propose a tool-augmented test-time scaling method that improves LLMs' agentic reasoning performance by adding tools that encourage models to revisit. We expect our benchmark and the agentic reasoning graph framework to aid future studies that aim to understand and push the boundaries of agentic reasoning.
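To make the graph construction concrete, here is a minimal sketch of one way to build a reasoning path and count revisits. It assumes sentence-transformers embeddings (the `all-MiniLM-L6-v2` model) and scikit-learn k-means clustering; the helper name `build_reasoning_path` and the parameter `n_nodes` are hypothetical, and the paper's actual embedding model and clustering choices may differ.

```python
# Hypothetical sketch of the agentic reasoning graph: cluster document
# embeddings into nodes, map each tool call to its nearest node, and
# count "revisits" (returns to a node after leaving it).
# Assumptions: sentence-transformers for embeddings, scikit-learn k-means
# for clustering; the paper's actual choices are not specified here.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def build_reasoning_path(documents, tool_call_texts, n_nodes=8):
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    doc_emb = model.encode(documents)                # embed environment docs
    kmeans = KMeans(n_clusters=n_nodes, n_init="auto").fit(doc_emb)

    call_emb = model.encode(tool_call_texts)         # embed each tool call
    path = kmeans.predict(call_emb)                  # nearest node per call
    # Collapse consecutive repeats so a revisit means leaving and returning.
    collapsed = [path[0]] + [n for prev, n in zip(path, path[1:]) if n != prev]
    revisits = sum(1 for i, n in enumerate(collapsed) if n in collapsed[:i])
    return collapsed, revisits
```

Under these assumptions, a trajectory whose collapsed path is e.g. `[2, 5, 2]` contains one revisit: the agent left node 2 and later returned to it.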
Supplementary Material: zip
Primary Area: datasets and benchmarks
Submission Number: 21374