DRIP: Decompositional Reasoning for agent Interpretable Planning

ACL ARR 2025 May Submission2592 Authors

19 May 2025 (modified: 03 Jul 2025) · ACL ARR 2025 May Submission · CC BY 4.0
Abstract: Research on LLM agents has shown remarkable progress, particularly in planning methods that leverage the reasoning capabilities of LLMs. However, challenges such as robustness and efficiency remain in LLM-based planning, with robustness in particular posing a significant barrier to real-world applications. In this study, we propose a framework that incorporates human reasoning abilities into planning. Specifically, the framework mimics the human ability to break complex problems into simpler ones: it decomposes a complex task into preconditions and then derives subtasks from them. Our evaluation experiments demonstrate that this human-like capability can be effectively applied to planning. Furthermore, the proposed framework exhibits superior robustness, offering new perspectives for LLM-based planning methods.
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: AI/LLM Agents, robustness, Neurosymbolic Approaches to NLP
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: Japanese
Submission Number: 2592