SkillWrapper: Skill Abstraction in the Era of Foundation Models

Published: 24 Oct 2024, Last Modified: 06 Nov 2024
Venue: LEAP 2024 Poster
License: CC BY 4.0
Keywords: Skill Abstraction, Task Planning, Active Learning
TL;DR: We propose a system that automatically characterizes black-box robot skills with interpretable symbolic representations and uses those representations to solve planning problems.
Abstract: We envision a future where robots will be equipped “out of the box” with composable and portable skills. However, the conditions under which these skills will successfully execute are not formalized in a way that lets robots autonomously compose skills, posing difficulties for robot programmers who operate such robots. Abstractions are a key requirement to enable robots to perform complex tasks. Often, domain experts hand-craft these abstractions, introducing bias from human intuition. Alternatively, computational approaches can be used to invent abstractions autonomously, but human programmers or end users may not be able to interpret the resulting abstractions. We present an approach for autonomously learning natural-language-interpretable abstractions. Our method learns symbolic representations for black-box robot skills, such as GoTo and PickUp, from high-dimensional, unstructured input in the form of 2D images. Specifically, we use foundation models to propose exploratory sequences of skill executions, to invent symbolic predicates that disambiguate low-level state transitions, and to classify when these predicates hold in a given state. We present preliminary results in a simulated setting, demonstrating the feasibility of our method.
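To make the abstract's pipeline concrete, below is a minimal Python sketch of the three foundation-model roles it names: proposing exploratory skill sequences, inventing predicates when observed transitions cannot be disambiguated, and classifying whether a predicate holds in an image. Every name and interface here (Predicate, Transition, propose_skill_sequence, invent_predicate, predicate_holds, learn_abstractions) is an illustrative assumption; the paper does not publish code, and the model queries are stubbed out.

```python
# Hypothetical sketch of the SkillWrapper-style loop described in the
# abstract. All interfaces are assumptions; foundation-model calls are
# stubbed so the file runs standalone.

from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Predicate:
    name: str          # e.g. "Holding"
    description: str   # natural-language meaning, interpretable by users

@dataclass
class Transition:
    skill: str             # black-box skill, e.g. "GoTo" or "PickUp"
    image_before: bytes    # raw 2D observation before execution
    succeeded: bool        # did the skill execute successfully?

def propose_skill_sequence(skills: List[str], step: int) -> List[str]:
    """Role 1 (stub): a foundation model proposes exploratory executions."""
    return [skills[step % len(skills)]]

def invent_predicate(conflicting: List[Transition], n: int) -> Predicate:
    """Role 2 (stub): a foundation model invents a predicate that
    disambiguates same-skill transitions with different outcomes."""
    return Predicate(f"P{n}", "invented from conflicting transitions")

def predicate_holds(pred: Predicate, image: bytes) -> bool:
    """Role 3 (stub): a vision-language model classifies the predicate."""
    return False

def abstract_state(predicates: List[Predicate], image: bytes) -> frozenset:
    """Lift a raw observation to the set of predicate names that hold."""
    return frozenset(p.name for p in predicates if predicate_holds(p, image))

def learn_abstractions(skills: List[str],
                       execute: Callable[[str], Transition],
                       budget: int = 20) -> List[Predicate]:
    history: List[Transition] = []
    predicates: List[Predicate] = []
    for step in range(budget):
        for skill in propose_skill_sequence(skills, step):
            history.append(execute(skill))
        # A conflict: two executions of the same skill from the same
        # abstract state, one succeeding and one failing. The current
        # predicates are too coarse to explain the difference, so a new
        # predicate is invented.
        for t in history:
            key = (t.skill, abstract_state(predicates, t.image_before))
            clash = [u for u in history
                     if (u.skill, abstract_state(predicates, u.image_before)) == key
                     and u.succeeded != t.succeeded]
            if clash:
                predicates.append(invent_predicate([t, *clash], len(predicates)))
                break
    return predicates

# Toy usage: a fake simulator where every other execution succeeds.
if __name__ == "__main__":
    calls = {"n": 0}
    def execute(skill: str) -> Transition:
        calls["n"] += 1
        return Transition(skill, b"", succeeded=calls["n"] % 2 == 0)

    preds = learn_abstractions(["GoTo", "PickUp"], execute, budget=4)
    print([p.name for p in preds])
```

The design choice the sketch tries to reflect is that predicate invention is conflict-driven: new symbols are only introduced when the current abstraction fails to disambiguate observed low-level transitions, which keeps the learned vocabulary small and tied to planning-relevant distinctions.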
Submission Number: 53