Language Model as Planner and Formalizer under Constraints

ACL ARR 2026 January Submission6920 Authors

06 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · Readers: Everyone · License: CC BY 4.0
Keywords: applications, code models, LLM/AI agents, neurosymbolic approaches, scaling, robustness
Abstract: LLMs have been widely used in planning, either as planners that generate action sequences end-to-end, or as formalizers that represent the planning domain and problem in a formal language from which plans can be derived deterministically. However, both lines of work rely on standard benchmarks that include only generic and simplistic environmental specifications, leading to potential overestimation of the planning ability of LLMs and to safety concerns in downstream tasks. We bridge this gap by augmenting widely used planning benchmarks with manually annotated, fine-grained, and rich natural language constraints spanning four formally defined categories. Across 4 state-of-the-art reasoning LLMs, 4 formal languages, and 4 datasets, we show that introducing a single one-sentence constraint consistently halves performance, indicating that current LLMs lack robustness to such constraints and pointing to an avenue for future research.
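To make the formalizer setting concrete, below is a minimal hypothetical sketch (not taken from the paper's datasets): a standard Blocksworld PDDL problem augmented with one natural language constraint, "block b2 must never rest directly on the table," expressed as a PDDL 3.0 trajectory constraint. The domain, object names, and predicates are illustrative assumptions; in the formalizer pipeline, the LLM would be expected to produce such a representation from the natural language description, after which a symbolic planner derives (or rejects) a plan deterministically.

```pddl
;; Hypothetical illustration (assumed 4-operator Blocksworld domain, not from the paper).
;; The one-sentence NL constraint "block b2 must never rest directly on the table"
;; is formalized as a PDDL 3.0 trajectory constraint in the :constraints block.
(define (problem bw-constrained)
  (:domain blocksworld)
  (:objects b1 b2 b3)
  (:init (ontable b1) (on b2 b1) (ontable b3)
         (clear b2) (clear b3) (arm-empty))
  (:goal (and (on b1 b2) (on b2 b3)))
  ;; Added constraint: in every state along the plan, b2 is not on the table.
  (:constraints (always (not (ontable b2)))))
```

A PDDL 3-capable planner would discard any plan that places b2 on the table at an intermediate step (e.g., unstacking b2 and putting it down before restacking), accepting only trajectories that satisfy the `always` condition.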
Paper Type: Long
Research Area: Language Models
Research Area Keywords: applications, code models, LLM/AI agents, neurosymbolic approaches, scaling, robustness
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English, PDDL, LTL
Submission Number: 6920