Keywords: large language models, skills, options, llm, llm agents, interactive tasks, planning, abstraction, hierarchical, embodied agents
TL;DR: We show that prompting LLMs with skills, i.e., descriptions of shared behaviors that are useful for solving tasks, can improve performance on interactive tasks.
Abstract: Complex sequential decision-making tasks often require hierarchical thinking and abstraction: breaking down these tasks into simpler subtasks that can be solved with reusable behaviors, or *skills*.
In this work, we show that large language models (LLMs) can benefit from skill abstractions when solving interactive tasks.
We propose **SkillAct**, a simple prompting approach that incorporates skill descriptions into the prompt and can be combined with existing prompting methods.
In addition, we demonstrate that these skill abstractions can be *learned* from few-shot demonstrations by prompting LLMs.
Empirically, **SkillAct** improves the performance of existing approaches such as ReAct on the interactive task benchmark ALFWorld.
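
Concretely, the idea can be pictured as prepending skill descriptions to an existing ReAct-style few-shot prompt. The sketch below is an illustration under that assumption only: the skill wording, the `build_skillact_prompt` helper, and the prompt layout are hypothetical and not taken from the paper.

```python
# A minimal sketch of SkillAct-style prompting. All names here (SKILLS,
# build_skillact_prompt, the skill wording) are illustrative assumptions,
# not the paper's actual implementation.

# Skills: short natural-language descriptions of reusable behaviors.
SKILLS = [
    "find(obj): search likely receptacles one by one until obj is visible.",
    "take_and_place(obj, recep): take obj, go to recep, then put obj in/on recep.",
]

def build_skillact_prompt(few_shot_examples: str, task: str) -> str:
    """Prepend skill descriptions to a ReAct-style few-shot prompt."""
    skill_block = "You may use the following skills:\n" + "\n".join(
        f"- {s}" for s in SKILLS
    )
    return f"{skill_block}\n\n{few_shot_examples}\n\nYour task is to: {task}\n> think:"
```

The resulting string would then be sent to an LLM in place of the plain ReAct prompt; the few-shot demonstrations themselves stay unchanged, so the method composes with existing prompting pipelines.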
Submission Number: 39