Holistic Prompting: Joint Reasoning with Reusable States and Shortcut Discovery

ICLR 2026 Conference Submission 20431 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: large language model, general problem solving, heuristic search, reasoning, planning, reuse
TL;DR: We enable LLMs to reuse intermediate solutions across related problem instances for specific problem classes
Abstract: Large Language Models (LLMs) have demonstrated significant capabilities in complex reasoning tasks, often employing frameworks such as Tree of Thoughts (ToT) and Chain-of-Thought (CoT). However, such methods typically rely on trajectory-based state representations, where each state encapsulates the entire history of reasoning steps. For tasks whose solution spaces contain many overlapping reasoning paths, this representation inherently restricts the reuse of intermediate computations and leads to redundant exploration. The effect is even more pronounced when samples can be solved jointly, since reasoning paths also overlap across different problem instances. We present Holistic Prompting, a novel framework that enables LLMs to reuse intermediate results both within and across problem instances. Designed for tasks that exhibit reusable sub-structures, Holistic Prompting unifies shared access to intermediate thoughts with an active shortcut-discovery mechanism, enabling focused search between unsolved and solved subproblems and aggressive pruning of reasoning paths. Our experiments show that reuse is highly profitable in ToT-style breadth-first search on the math puzzle Game24 and in AlphaZero-style Monte Carlo tree search in retrosynthetic planning, where Holistic Prompting achieves higher success rates while requiring fewer model invocations and producing shorter outputs.
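To make the reuse idea concrete, below is a minimal, self-contained Python sketch for Game24 in which a state is a canonicalized multiset of remaining numbers rather than a full trajectory, so identical subproblems reached from different instances collapse to one cache entry. This is an illustrative assumption, not the authors' implementation: a brute-force enumerator stands in for the LLM proposer, the shared `lru_cache` stands in for the shared pool of intermediate thoughts, and the early return on an already-solved subproblem only loosely mirrors the paper's shortcut-discovery mechanism. All names (`solvable`, `canonical`, `instances`) are hypothetical.

```python
# Sketch: joint solving of Game24 instances with shared intermediate states.
# A state is a sorted tuple of the numbers still available; trajectories are
# deliberately discarded so overlapping subproblems are computed only once.
from fractions import Fraction
from functools import lru_cache
from itertools import combinations

TARGET = Fraction(24)

def canonical(nums):
    """Order-independent key: the same multiset of remaining numbers
    maps to the same state, regardless of which instance produced it."""
    return tuple(sorted(nums))

@lru_cache(maxsize=None)
def solvable(state):
    """Decide whether `state` can be reduced to 24. The cache is shared
    across all instances, so a subproblem solved for one instance is
    reused verbatim by every other instance that reaches it."""
    if len(state) == 1:
        return state[0] == TARGET
    nums = list(state)
    for i, j in combinations(range(len(nums)), 2):
        a, b = nums[i], nums[j]
        rest = [nums[k] for k in range(len(nums)) if k not in (i, j)]
        results = {a + b, a - b, b - a, a * b}
        if b != 0:
            results.add(a / b)
        if a != 0:
            results.add(b / a)
        for r in results:
            if solvable(canonical(rest + [r])):
                return True  # stop as soon as a solved subproblem is reached
    return False

# Joint solving: overlapping subproblems across instances hit the same cache.
instances = [(4, 9, 10, 13), (1, 2, 3, 4), (1, 5, 5, 5)]
for inst in instances:
    print(inst, solvable(canonical(Fraction(n) for n in inst)))
```

In this toy version, reuse shows up as cache hits: solving the second and third instances is cheaper because many of their intermediate multisets were already decided while solving the first. The paper's framework applies the same principle where expansions are expensive LLM calls, which is where the savings in model invocations would come from.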
Supplementary Material: zip
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 20431