Keywords: large language model, hierarchical task network, interactive task learning
Abstract: Hierarchical Task Networks (HTNs) provide an interpretable, structured framework for problem-solving but are often brittle, relying on a fixed set of predefined operators and requiring numerous examples to learn new procedures. In contrast, Large Language Models (LLMs) offer generative flexibility but lack the reliability and transparency required for robust cognitive systems. This paper introduces L.E.A.R.N (Learning by Example Authoring and Reasoning Network), a hybrid cognitive architecture that integrates the strengths of both approaches. L.E.A.R.N utilizes an LLM to generate candidate solution traces and, when necessary, propose new primitive operators. This output is then verified and structured within an HTN, which grounds the knowledge and ensures correctness. This approach shifts the human's role from a demonstrator to a verifier, significantly reducing the authoring burden. Our experimental evaluation shows that L.E.A.R.N learns expert problem-solving skills, such as solving quadratic equations, faster and with fewer demonstrations than an HTN-only baseline, while still providing the explainability and reliability that purely generative models lack. The architecture represents a step toward more adaptive and flexible cognitive systems.
Paper Track: Technical paper
Submission Number: 48