Planning With Large Language Models Via Corrective Re-Prompting

05 Oct 2022 (modified: 05 May 2023) · FMDM@NeurIPS2022
Keywords: large-language models, planning, prompting, embodied AI
TL;DR: We propose a prompting-based strategy for extracting plans from an LLM that leverages a novel and readily accessible source of information from environments: precondition errors.
Abstract: Extracting knowledge from Large Language Models (LLMs) offers a path to designing intelligent, embodied agents that take advantage of the common-sense knowledge present in large language datasets. Related works have queried LLMs with a wide range of contextual information, such as goals, sensor observations, and scene descriptions, to generate high-level action plans for specific tasks. In this work, we propose a prompting-based strategy for extracting executable plans from an LLM that leverages a novel and readily accessible source of information: precondition errors. Our approach assumes that actions are only afforded execution in certain contexts (i.e., implicit preconditions must be met for an action to execute), and that the embodied agent can determine whether an action is executable in the current context (e.g., whether a precondition error is present). When an agent is unable to execute an action in a plan, our approach re-prompts the LLM with precondition error information to extract a useful and executable action that achieves the intended goal in the current context. We evaluate our approach in the VirtualHome simulation environment on 88 different tasks and 7 scenes. We evaluate different prompt templates and compare to methods that naively re-sample actions from the LLM. We find that our approach using precondition errors improves the executability and semantic correctness of plans, while also reducing the number of corrective re-prompts needed when querying for actions.
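The corrective re-prompting loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `execute_with_reprompting`, `ToyEnv`, `toy_llm`, and the prompt templates are all hypothetical stand-ins for the LLM, the VirtualHome-style environment, and the paper's actual prompt design.

```python
def execute_with_reprompting(llm, env, goal, max_corrections=3):
    """Query an LLM for actions one at a time; when the environment
    reports a precondition error, re-prompt the LLM with that error
    message to obtain a corrected, executable action (illustrative sketch)."""
    plan = []
    prompt = f"Goal: {goal}. Next action?"
    while not env.done():
        action = llm(prompt)
        error = env.try_execute(action)  # None on success, else an error string
        corrections = 0
        while error is not None and corrections < max_corrections:
            # Corrective re-prompt: feed the precondition error back to the LLM.
            prompt_with_error = (
                f"{prompt} The action '{action}' failed: {error}. "
                f"Try a different action."
            )
            action = llm(prompt_with_error)
            error = env.try_execute(action)
            corrections += 1
        if error is not None:
            return plan, False  # could not recover within the budget
        plan.append(action)
        prompt = f"Goal: {goal}. Done so far: {plan}. Next action?"
    return plan, True


class ToyEnv:
    """Toy stand-in environment: 'grab milk' has the implicit
    precondition that the fridge is already open."""
    def __init__(self):
        self.fridge_open = False
        self.has_milk = False

    def done(self):
        return self.has_milk

    def try_execute(self, action):
        if action == "open fridge":
            self.fridge_open = True
            return None
        if action == "grab milk":
            if not self.fridge_open:
                return "precondition error: fridge is closed"
            self.has_milk = True
            return None
        return "precondition error: unknown action"


def toy_llm(prompt):
    # Naive stand-in policy: try to grab the milk first; if the error
    # feedback says the fridge is closed, open it.
    if "fridge is closed" in prompt:
        return "open fridge"
    return "grab milk"
```

With these stubs, the first proposed action ("grab milk") fails its precondition, the error is folded into the prompt, and the corrected plan ["open fridge", "grab milk"] executes to completion; naive re-sampling without the error message would have no signal steering it toward the fix.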