Knowledge Model Prompting Increases LLM Performance on Planning Tasks

ICLR 2026 Conference Submission 16104 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Large Language Models (LLMs), Procedural Tasks, Task-Method-Knowledge (TMK), Knowledge Representation, Hierarchical Task Decomposition, Prompt Engineering
TL;DR: Using the Task-Method-Knowledge (TMK) framework to structure prompts boosts the reasoning and planning of LLMs. By providing context on the "why, what, and how" of a task, this method improved model performance on the PlanBench benchmark.
Abstract: Large Language Models (LLMs) often struggle with reasoning and procedural tasks. Many prompting techniques have been developed to assist LLM reasoning, notably Chain-of-Thought (CoT); however, these techniques have also come under scrutiny as LLMs' ability to reason at all has come into question. Borrowing from the domain of education, this paper investigates whether the Task-Method-Knowledge (TMK) framework can improve LLM reasoning capabilities beyond its previously demonstrated success in educational applications. The TMK framework's ability to capture causal, teleological, and hierarchical reasoning structures, combined with its explicit task decomposition mechanisms, makes it particularly well-suited for addressing LLM reasoning deficiencies. Unlike other hierarchical frameworks such as HTN and BDI, TMK provides explicit representations of not just what to do and how to do it, but also why actions are taken. The study evaluates this approach using the PlanBench benchmark, focusing on the Blocksworld domain to test reasoning and planning capabilities, and examines whether TMK-structured prompting can help LLMs better decompose complex planning problems into manageable subtasks. Our research stands to bridge the gap between symbolic reasoning approaches and modern neural language models by establishing the conditions under which the TMK framework can enhance LLM reasoning and support broader applications in agentic systems.
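As a rough illustration of what TMK-structured prompting could look like for a Blocksworld instance, the Python sketch below assembles the three layers described in the abstract (task/goal, method/decomposition, knowledge/operators) into a single prompt string. The field wording, the decomposition steps, and the helper name `build_tmk_prompt` are assumptions made for exposition; they are not the submission's actual prompt templates or PlanBench harness.

```python
# Illustrative sketch only: the TMK prompt schema below is an assumption
# for exposition, not the authors' released prompt format.

def build_tmk_prompt(problem_description: str) -> str:
    """Assemble a TMK-structured prompt for a Blocksworld planning instance."""
    # Task: the goal and the "why" -- what state the plan must achieve.
    task = (
        "Task: Achieve the goal configuration of blocks described in the problem. "
        "Success means every block is stacked exactly as specified in the goal."
    )
    # Method: the "how" -- a hierarchical decomposition into subtasks.
    method = (
        "Method: Decompose the problem into subtasks. "
        "1) Identify blocks already in their goal position. "
        "2) Unstack blocks that sit on top of misplaced blocks. "
        "3) Move each misplaced block to its goal position, bottom-up."
    )
    # Knowledge: the "what" -- domain facts and legal operators.
    knowledge = (
        "Knowledge: Legal actions are pick-up, put-down, stack, and unstack. "
        "Only one block may be held at a time, and only clear blocks can be moved."
    )
    return "\n\n".join([
        task,
        method,
        knowledge,
        problem_description,
        "Produce a step-by-step plan using only legal actions.",
    ])


if __name__ == "__main__":
    example = ("Initial: block A is on B, B is on the table, C is on the table. "
               "Goal: B is on C, A is on B.")
    print(build_tmk_prompt(example))
```

The design choice reflected here is simply that each TMK layer is rendered as its own labeled section before the problem statement, so the model sees the why, how, and what of the domain before being asked for a plan.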
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 16104