SkillAct: Using Skill Abstractions Improves LLM Agents

Published: 18 Jun 2024, Last Modified: 26 Jul 2024 · ICML 2024 Workshop on LLMs and Cognition Poster · CC BY 4.0
Keywords: large language models, skills, options, llm, llm agents, interactive tasks, planning, abstraction, hierarchical, embodied agents
TL;DR: We show that prompting LLMs with skills, or descriptions of shared behaviors that are useful for solving tasks, can improve performance on interactive tasks.
Abstract: Complex sequential decision-making tasks often require hierarchical thinking and abstraction: breaking these tasks down into simpler subtasks that can be solved with reusable behaviors, or *skills*. In this work, we show that large language models (LLMs) can benefit from using skill abstractions to solve interactive tasks successfully. We propose a simple prompting approach named **SkillAct**, which can extend existing prompting approaches. In addition, we demonstrate that these skill abstractions can be *learned* from few-shot demonstrations by prompting LLMs. Finally, we show that **SkillAct** improves the performance of existing approaches such as ReAct on the interactive task benchmark ALFWorld.
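To make the prompting idea concrete, below is a minimal illustrative sketch, not the authors' implementation: the skill descriptions, the ReAct-style exemplar, and the helper `build_skillact_prompt` are all hypothetical. It shows one plausible reading of the approach, in which natural-language skill descriptions are prepended to a standard ReAct few-shot prompt before querying the LLM.

```python
# Hypothetical sketch of a SkillAct-style prompt (names and skill strings
# are illustrative assumptions, not taken from the paper).

# Reusable skill abstractions: short natural-language descriptions of
# shared behaviors useful across ALFWorld-like tasks.
SKILLS = [
    "find(obj): search likely locations until obj is visible.",
    "take(obj, loc): pick up obj from loc once it is visible.",
    "heat(obj, appliance): use an appliance (e.g. a microwave) to heat obj.",
]

# A standard ReAct-style few-shot exemplar (abbreviated).
REACT_EXAMPLE = """Task: put a hot apple in the fridge.
Thought: I should find the apple first.
Action: find(apple)
..."""

def build_skillact_prompt(task: str) -> str:
    """Compose a prompt that lists the skills before the ReAct exemplar."""
    skill_block = "\n".join(f"- {s}" for s in SKILLS)
    return (
        "You can use the following skills:\n"
        f"{skill_block}\n\n"
        f"{REACT_EXAMPLE}\n\n"
        f"Task: {task}\n"
    )

if __name__ == "__main__":
    # The resulting string would be sent to the LLM as the agent prompt.
    print(build_skillact_prompt("put a clean mug on the desk"))
```

Under this reading, the skill block is the only change relative to plain ReAct, which is why the approach can extend existing prompting methods; the paper additionally learns such skill descriptions from few-shot demonstrations rather than writing them by hand.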
Submission Number: 39